Tuesday, September 05, 2006

Javascript Autocast

A typical day at work requires me to constantly switch between writing code in C# and Javascript. This affects the way I write code in both languages. Since C# is stricter than Javascript, my coding style leans towards C#, and sometimes I find myself writing Javascript code in C# style, which can be viewed as a good or a bad thing.

Every language has its goodies that, if we take advantage of them, can produce better quality code.

Let's take a common task as an example: we want to test a string variable for null and empty.

In C#, I would write:


string str;
...

if (str != null && !str.Equals(string.Empty)) { }


Applying the same C# style to Javascript will result in the following code:

var str;
...
if (str != null && str != '') { }

The above code is not optimized. In Javascript, we can utilize the 'autocast' feature to reduce the code to:

var str;
...
if (str) { }

A null or an empty string autocasts to false, so the condition evaluates to false.

As you can see, the code is much shorter and cleaner. Size does matter in Javascript. Shorter code translates into faster download time.

Autocast also applies to other types/conditions:

  • An undefined variable is evaluated as false. A variable is undefined if it hasn't been assigned a value.
    Example:

    var myVar;
    if (myVar) { } // false


  • An empty string is evaluated as false

    var myStr = '';
    if (myStr) { } // false

    var myStr2 = 'ABC';
    if (myStr2) { } // true

  • Zero is evaluated as false

    var myNum = 0;
    if (myNum) { } // false

    var myNum2 = 1;
    if (myNum2) { } // true

  • Null is evaluated as false

    var myObj = null;
    if (myObj) { } // false

  • An object is evaluated as true

    var myObj = {};
    var myObj2 = new Object();
    if (myObj) { } // true
    if (myObj2) { } // also true


  • An empty array is evaluated as true
    An array is essentially an object and is therefore evaluated as true.

    var myArray = [];
    var myArray2 = new Array();
    if (myArray) { } // true
    if (myArray2) { } // true
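
    One consequence: you cannot rely on autocast to test whether an array is empty. Check its length property instead, since zero autocasts to false:

    var myArray3 = [];
    if (myArray3.length) { } // false: length is 0, and 0 autocasts to false

    myArray3.push('ABC');
    if (myArray3.length) { } // true: length is now 1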

Friday, September 01, 2006

Javascript optimization

The Internet Explorer team (the team that brings us IE 7) has posted a nice article about Javascript optimization. This is the first part of a planned three-part series. In this article, the main driver behind the optimization is reducing the number of symbolic lookups the Javascript engine makes to map a variable name to its real object.
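
To illustrate the idea with my own minimal sketch (not code taken from the article): caching a frequently used object in a local variable avoids repeating the same symbolic lookup inside a loop.

function hideAll(ids) {
    var d = document; // resolve 'document' once instead of on every iteration
    for (var i = 0, n = ids.length; i < n; i++) {
        d.getElementById(ids[i]).style.display = 'none';
    }
}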

It is quite rare to find such optimization tips from the maker of the browser. I am glad the IE team is finally paying attention to improving Javascript development. We need more articles like this, as more and more Javascript code is written nowadays. In my current project, which uses ASP.NET 1.1, about 75% of the UI code is written in Javascript, and the rest is the C# code-behind.

Please find the article here

Tuesday, August 29, 2006

Know your end users

On a Saturday morning, I went to a bank to do some over-the-counter transactions. I was attended to by a customer service staff member who used a Dell desktop computer with an LCD monitor.

My request took many forms to fill in and a lot of data entry into the system. I couldn't see the monitor, but I could imagine how much typing was involved just by watching how busy the banker was at the keyboard. Surprisingly, she didn't use the mouse at all and relied entirely on the keyboard and its function keys (F1-F12). The mouse was connected to the computer, but she had placed it in front of the keyboard, so I believe she just wanted to free up some space by getting it out of the way.

On another occasion, at a travel agency, I noticed a different balance between keyboard and mouse usage. I was booking an airline ticket, and the staff member attending to me used both keyboard and mouse. However, he used the keyboard much more frequently, entering not-so-user-friendly commands in a terminal window, and only occasionally used the mouse to click the big toolbar on top of the terminal window. Perhaps the toolbar is used to execute simple commands like 'Print Flight Itinerary'.

Drawing from the two short scenarios above, we can see that different users use an application in different ways. Naturally, the platform of an application defines its limitations: in a terminal window where everything is text, the keyboard is definitely the main input device. However, in most of today's applications, desktop-based or web-based, the mouse and the keyboard are both acceptable input devices. Still, many people choose to rely mainly on the keyboard. They have valid reasons, most likely that they are so familiar with the keyboard that they can operate faster with it than with the mouse.

It is paramount for us, the software developers, to know the behaviour of the end users who will actually use the application we build. Imagine if we develop a cool interactive web application, fully enriched with DHTML popups, animations, and drag and drop, only to realize after the release that the users prefer to navigate using combinations of keyboard arrows and tabs rather than the mouse.

I highly recommend that developers, who spend most of their time behind the stage, come out of their cubicles and pay a visit to the client's office. Look at how your end users actually use the application you build. I bet you will be surprised, and it may change the way you design and develop your applications.

Tuesday, August 22, 2006

Visual Studio 2003 SP1 is finally here

Finally, the long overdue, first and possibly last, and much anticipated Service Pack 1 for Visual Studio 2003 has been released. You can download the 156MB package from the download page. The service pack offers no new features, but it fixes many bugs listed in the bug list.

One of the fixes that looks promising is no. 832714: "Visual Studio cannot open a Web site if a duplicate Web site exists". This problem often happened when I opened a fresh web project from SourceSafe and manually created the virtual directory for it. However, I haven't tested this fix yet.

The installation of SP1 requires Visual Studio 2003 CD 1, so make sure you keep it handy. If your team has several developers, it's more efficient to copy the CD to a shared network drive and point the installer to the shared drive.

Saturday, August 05, 2006

Check an HTML element is in view

Most of the web pages I create contain a form for the user to enter data. Generally, I put a customized version of ASP.NET's validation summary control on the page to display validation errors in a unified way. Since this is a customized control, I also use it to display the status of an AJAX operation, whether successful or not.

Whenever I display a message, I want it to immediately grab the user's attention. Initially, I called object.scrollIntoView. This is an IE-specific method that scrolls the screen so that the corresponding object comes into view; as a result, the element is aligned at either the top or the bottom of the screen. This is OK, but when the object is already in view, the screen jiggles and changes position so that the object sits at the top. Quite an annoying experience for the user!

So I wrote this Javascript function to check whether an object is in view.

The function accepts two parameters: a reference to the object (HTML element) and a boolean value bWhole. If bWhole is false, the function returns true if either the top-left or the bottom-right corner of the element is within the viewable area, i.e. the element is at least partially visible. If bWhole is true, the function checks that both corners are within the viewable area, i.e. the whole element must be visible.


function isInView(o, bWhole) {
    if (typeof(o) == 'undefined' || !o) return false;
    if (typeof(bWhole) == 'undefined') bWhole = false;

    // absolute position of the element's top-left corner
    var x1 = o.offsetLeft;
    var y1 = o.offsetTop;
    var p = o;
    while (p.offsetParent) {
        p = p.offsetParent;
        x1 += p.offsetLeft;
        y1 += p.offsetTop;
    }

    // absolute position of the element's bottom-right corner
    var x2 = x1 + o.offsetWidth - 1;
    var y2 = y1 + o.offsetHeight - 1;

    // current viewable area of the document
    var left = document.body.scrollLeft;
    var right = left + document.body.clientWidth - 1;
    var top = document.body.scrollTop;
    var bottom = top + document.body.clientHeight - 1;

    // bWhole: the whole element must be visible;
    // otherwise either visible corner counts as in view
    return (bWhole) ? (x1 >= left && x2 <= right && y1 >= top && y2 <= bottom)
                    : (x1 >= left && x1 <= right && y1 >= top && y1 <= bottom) ||
                      (x2 >= left && x2 <= right && y2 >= top && y2 <= bottom);
}
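
With this function, the scrollIntoView call can be guarded so the screen only scrolls when the message is not already fully visible. A minimal usage sketch, where 'valSummary' is a hypothetical element id:

var summary = document.getElementById('valSummary');
if (summary && !isInView(summary, true)) {
    summary.scrollIntoView(); // scroll only when the element is not fully in view
}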

Monday, July 31, 2006

UI Design Patterns: Hierarchical Master Detail

Problem Summary

The user needs to traverse hierarchical or tree-like structured data and perform CRUD (Create, Read, Update, and Delete) operations.


Screenshot

Use When

  • The data is structured in a hierarchical manner. Example: organization structure, web pages.
  • The data has enough fields/columns to warrant a form, and they fit within one screen length.

Don't Use When

  • The data is flat
  • The data consists of very few fields (e.g. only a key and a value), or of so many fields that the form spans several screen lengths.

Solution

  • Divide the screen into two columns.
  • The left column contains a tree view control that lets the user navigate the hierarchical data. On top of the tree view is a toolbar (or a collection of buttons) that operates on the tree. In the screenshot, it has "Add Child" to add a child node, "Add Root" to add a root node, and "Remove" to remove a node.
  • The right column contains a form that the user can use to add new data or update existing data.
  • The form on the right only appears when a node is selected on the left.

Suggested Improvement

  • Drag and drop between nodes in the tree view. Drag and drop is a sophisticated operation that moves a whole branch of nodes from one parent to another.
  • A clone-node feature for faster creation of new data. Instead of always starting with a blank form, the user is aided with a copy of the data from another node.
  • A load-on-demand tree view to handle large hierarchical data.
  • Banding to improve performance when there are too many children under one parent node.
  • Hierarchical delete. When a parent node is removed, all its direct and indirect child nodes are also deleted (see the sketch after this list). If this is not possible, the user should only be able to delete bottom-up.
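
To illustrate the hierarchical delete on the client side, here is a minimal sketch assuming a hypothetical in-memory node model where each node keeps an array of its children (a real tree view control would expose a similar structure):

// Hypothetical node model: { name: 'Node', children: [ /* child nodes */ ] }
function deleteBranch(node, parent) {
    // depth-first: delete all descendants before detaching the node itself
    while (node.children.length > 0) {
        deleteBranch(node.children[0], node);
    }
    for (var i = 0; i < parent.children.length; i++) {
        if (parent.children[i] == node) {
            parent.children.splice(i, 1); // detach the node from its parent
            break;
        }
    }
}

The same cascade must of course also be applied on the server side when the deletion is persisted.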

Sunday, July 30, 2006

Enterprise UI Design Patterns anyone?

As mentioned in my previous post, I have been closely following the UI design patterns published on the Internet. These patterns are real problem solvers; however, I feel the majority of them do not apply to the type of web application I am building.

In my current company and previous ones, I have built web applications for business use. This type of application has its own distinct characteristics and problems:
  • Manipulates a lot of data in form layout
  • Majority of the operations are CRUD (Create Read Update Delete)
  • Data validations
  • Hierarchical and flat data structure
  • Deals with master-detail relationship
  • etc.

I have been identifying some UI patterns that my team and I often use in web applications to tackle the same recurring problems. In the next posts, I will start documenting those UI patterns so that my readers can benefit from them. Feel free to comment on my patterns, as I always need to continuously improve them.

Tuesday, July 25, 2006

UI Design Pattern Galore

Nowadays, a growing number of web sites collect UI design patterns. UI design patterns are common solutions to recurring problems in user interface design. Whenever I face a usability issue or make a judgement call on screen design, I always go back to the collection of UI design patterns. It is therefore very useful to bookmark web sites that collect design patterns. My top three sites are:

Yahoo! Design Pattern Library
It has a lot of interesting (read: advanced) patterns like drag and drop, animations, etc. It is a good way to discover what today's web applications can do. When you are ready to put the patterns into practice, try the companion Yahoo! UI library.

Designing Interfaces
I read the book of the same title before discovering this site. This is the book/site for discovering UI patterns beyond the web page.

Patterns in Interaction Design
Plenty of patterns and tons of screenshots make this site a good reference.

Wednesday, July 19, 2006

Microsoft hasn't abandoned us!

Better late than never. According to this Microsoft blog, Microsoft will release Service Pack 1 for VS.NET 2003. The beta has been out for some time, and they have finally decided to release it on 15 Aug 2006, if no further delay is introduced.

This is certainly good news for me, as I am still using VS.NET 2003. I have been struggling in the past few weeks because of the infamous "Unexpected error creating debug information file ... The process cannot access the file because it is being used by another process". In my case, the aspnet_wp.exe process locks the PDB file, so every time I want to build the solution I need to kill the process manually. Furthermore, whether this is related or not, my VS.NET debugger is not working properly. When I mouse over a variable, the value does not appear correctly; it shows up much later, after I have stepped through several lines of code, so QuickWatch and Watch become useless. This problem remains unsolved, although I have already taken the drastic measure of uninstalling and reinstalling VS.NET.

Fingers crossed that VS.NET 2003 SP1 will fix those problems; otherwise I will have to live with them until the next upgrade to VS.NET 2005 :(

Thursday, July 13, 2006

Web server and database server time difference

A time difference between the web server and the database server can cause hard-to-find bugs. This is what I experienced recently after deploying a web application to a live server. Unlike our development server, where IIS and SQL Server reside on one machine, in the live environment SQL Server sits on one machine and IIS on another.

For some reason, time synchronization between the two servers (Windows 2003) did not work, and as a result there was a significant time difference between them.

Because of the time difference, application features that depend on comparing the current time with a stored datetime value will not work properly. For example, when I save a user's password expiry date, I call DateTime.Now in the application (using the web server's time) and save the value to the database. In the stored procedure, I check whether the password has expired with a statement like:

-- check if password is expired
IF @PwdExpDate <= GETDATE()
    -- password is expired
ELSE
    -- password is still valid


Since the password expiry date is set on the web server and then compared on the database server, this comparison does not work properly due to the time difference.

Although fixing the time difference between the two servers is easy, I started to wonder whether we can always safely assume that the web server and the database server have the same time, or whether we need to introduce a programming guideline here:

"Datetime that is set in one machine can only be safely compared to the current time from the same machine"

In my case, the above guideline implies moving the comparison logic from the stored procedure to the application layer, or the other way around: setting the expiry date in the stored procedure and performing the comparison there as well.

Friday, June 30, 2006

Ordering Category in CodeSmith

While working on a CodeSmith template (*.cst), I ran into a small problem ordering categories in a property grid. The template I created has several categories, and I want them displayed in a specific order, not alphabetically as per the default.

I came across this discussion, which gave me a nice trick to order the categories: prefix the category name with a special character that the property grid does not display. A tab character (\t) does the trick, and multiple tabs can be stacked; the more tabs, the earlier the category appears. So, for example, if I want my categories in the following order:

Context
Persister
BusinessEntity
UnitTests

then in the CodeSmith template I need to write:

<%@ Property Name="SourceTable" Type="SchemaExplorer.TableSchema" Category="\t\t\tContext" %>
<%@ Property Name="CreatePersister" Type="System.Boolean" Category="\t\tPersister" Default="True" %>
<%@ Property Name="CreateBusinessEntity" Type="System.Boolean" Category="\tBusinessEntity" Default="True" %>
<%@ Property Name="CreatePersisterUnitTest" Type="System.Boolean" Category="UnitTests" Default="True" %>

Thursday, June 15, 2006

Ajax and Auto Save

The following article summarizes our experience working on an auto save feature in a web application. New technologies and new features always bring fresh challenges to developers and good things for users, but they may also raise issues that have never come up before. I hope that by sharing this experience, you will be more aware of the issues when working on a similar feature.


One of the recent challenges my team had was handling the session timeout issue when our users spend too much time on a web form. As background: a user session times out when there is no communication (client request) with the server for a certain period. When a timeout happens, the user has to log in again, and this potentially destroys any unsaved changes the user has made on the web form.


We didn't favor increasing the session timeout, since that imposes a greater risk on our application. What we needed was a feature or two that could handle session timeout gracefully.


We came up with the idea of auto save after using GMail for a while. The auto save feature has been in Microsoft Word for as long as I can remember, so we often take it for granted, but it is a new and sexy feature for a web application.


In case you haven't seen how auto save works in GMail: the feature runs silently in the background. It detects changes in the email content and periodically sends the content to the server for saving. A short, non-obstructive message appears to indicate that the auto save is done.


So we made up our minds to implement auto save in our web form. The form is much more complex than GMail's: it has about 25 fields (textbox, dropdownlist, textarea) and a rich text box that can contain HTML.


Briefly this is what we did:


  1. A timer is set in Javascript that invokes a function called saveData() every n minutes (a sketch of steps 1-3 follows this list).

  2. saveData() calls a home-grown Javascript form utility to extract the values of all fields into XML.

  3. We use AJAX to send the XML to the server.

  4. The server receives the XML and, based on the status of the data, does one of the following:

    1. If the user has not explicitly saved the data, we save the XML into a table created for the sole purpose of holding data temporarily.

    2. If the user has explicitly saved the data before (for example, by clicking the save button), we deserialize the XML into a business object and use the data access layer (DAL) class to save the data into the proper tables.


  5. Upon completing the operation, the server returns a status and the Javascript displays the auto save message.
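
Here is a minimal sketch of steps 1-3. The names formToXml() (standing in for our home-grown form utility), showAutoSaveMessage(), and the handler URL 'AutoSaveHandler.aspx' are hypothetical:

var AUTO_SAVE_INTERVAL = 5 * 60 * 1000; // the 'n minutes', here 5

function saveData() {
    var xml = formToXml(document.forms[0]); // step 2: extract all field values into XML
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject('Microsoft.XMLHTTP');
    xhr.open('POST', 'AutoSaveHandler.aspx', true); // step 3: send the XML via AJAX
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            showAutoSaveMessage(); // step 5: short, non-obstructive message
        }
    };
    xhr.send(xml);
}

window.setInterval(saveData, AUTO_SAVE_INTERVAL); // step 1: the timer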


From my observation, it is important that the auto save message does not distract the user. We don't want the user to lose focus on the field he/she is working on just for the sake of auto save. The auto save feature should be made as transparent as possible.


Points 1-5 above are what we did initially. After using the feature for a while, we noticed that the audit log was filling up fast, because we always saved to the database regardless of whether the data had changed.


We had two options to solve the problem. The first option was to compare the values of the fields on the client side and send the XML only if we found differences. The second option was to compare the values on the server side. While the second option requires less effort, we found that it has potential issues. As the value comparison is done on the server side, the client needs to constantly send the XML data to the server. This increases traffic and keeps the server busy unnecessarily. It also means the user session never expires, opening the application to further exploitation. The first option is indeed more challenging, but it creates more efficient traffic and the user session still works properly. (A sketch of the client-side comparison follows.)
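
As a sketch, the client-side comparison can be as simple as keeping a snapshot of the last serialized form and skipping the request when nothing has changed (again using the hypothetical formToXml() from the sketch above):

var lastSavedXml = '';

function saveDataIfChanged() {
    var xml = formToXml(document.forms[0]);
    if (xml == lastSavedXml) return; // nothing changed: no request, the session can expire normally
    lastSavedXml = xml;
    // ... send xml to the server exactly as in saveData() above ...
}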


Let's see the beauty of auto save in the following mini case study:


A user logs in to the application and enters data in the form, but hasn't yet pressed the save button. After n minutes, the auto save triggers and saves the data to the temporary table. Suddenly, there is a network disruption and the user loses his session.


After the network disruption is over, the user logs in again and revisits the same form. This time the application checks the temporary table and finds the auto saved data belonging to the user. The application pops up a message asking whether he/she wants to recover the data or continue with a blank form. If the user chooses to recover the data, the auto saved data is loaded and populated into the form. If he chooses to start with a blank form instead, the auto saved data is deleted.


Giving the option to recover data or start with a blank form is a nice-to-have feature, because in some cases the lost data is not worth recovering.


Does auto save solve session timeout issues? Partially, I would say. The session will still time out if the user does nothing, but with auto save the user will not lose all the data. I have been thinking of another feature that warns the user when the session is about to time out. This feature would complement auto save to make a really user-friendly form.
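
A rough idea of how that warning could look, assuming (hypothetically) a 20-minute session timeout:

var SESSION_TIMEOUT = 20 * 60 * 1000; // must match the server's session timeout
var WARNING_LEAD = 2 * 60 * 1000;     // warn two minutes before expiry

window.setTimeout(function() {
    alert('Your session is about to expire. Please save your work.');
}, SESSION_TIMEOUT - WARNING_LEAD);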

Friday, May 26, 2006

Sharing Visio Diagram

In my current company, I create a lot of database and class diagrams using Visio for Enterprise Architects (the one that comes with VS.NET 2003 Enterprise Architect) and share them with other people. Fellow developers who have VS.NET 2003 Professional always have problems viewing the files, because they don't have Visio installed on their machines.

Installing Visio Viewer 2003 doesn't help much. You can open a Visio file inside Internet Explorer, but the result is often unpredictable: the contents of an entity diagram tend to overflow the table boundary, line thickness is not correct, etc. Basically, I was not satisfied. Until now, the only solution was to print on paper and distribute.

I just discovered (doh!) that we can publish Visio diagrams as web pages. The result is not much different from the original diagram viewed inside Visio. I do notice some minor defects, like dashed lines converted to solid lines, but so far the defects are insignificant.



In case you haven't discovered it, to export a Visio diagram to a web page, just choose File - Save As Web Page.

Inside the dialog box, you can set some parameters. From my observation, some options don't really affect the output. I couldn't make the Custom Properties option work either, so I opted out of it to remove the empty space reserved for this feature.

Visio creates a lot of HTML and VML files and packs them into a folder. I moved this folder to our development web server, and instantly everyone in the team could access the Visio diagram. The diagram is drawn using VML (Vector Markup Language), so it still looks nice even when we zoom in and out.

Sunday, March 05, 2006

Managing ASHX files

In the web application I am working on, a web page is composed of several user controls. The composition happens at runtime and is driven by metadata stored in a database. As reconstructing a web page during postback is quite expensive, I opted for out-of-band AJAX requests for all our asynchronous requests.

Consequently, there is a growing number of ASPX pages created solely to handle AJAX requests in our web project. Initially, I created a folder called "Handlers" containing all the ASPX pages that handle out-of-band AJAX requests, separate from the normal ASPX files. I also suffixed the file names with Handler, like 'LookupHandler.aspx', to further distinguish ASPX handlers from normal ASPX pages.

I am aware of ASHX as an option for handling out-of-band AJAX requests instead of using ASPX. A lot of people say that ASHX is simpler and therefore should run faster, though I would need to see a benchmark to support this.

Although ASHX seems to give some performance benefit over ASPX, I was initially discouraged from using it in our projects or even recommending the approach to my fellow developers. ASHX is not supported by VS.NET 2003: there is no file template to begin with, and worst of all, there is no Intellisense support. This thought stayed with me until, some time later, I got spare time to experiment more with ASHX.

As it turns out, ASHX does support code-behind, but the code-behind just does not work as seamlessly as with ASPX. I can't make the code-behind file a child of the ASHX file in the web project, like this:


ContactHandler.ashx
|
+- ContactHandler.ashx.cs


This does not work.

What I can do is structure the ASHX and the code-behind at the same level. Not as nice as the way ASPX is structured, but still acceptable.

ContactHandler.ashx
ContactHandler.ashx.cs

To have the ASHX work properly with code-behind, we have to add a class attribute in the declaration.

ContactHandler.ashx contains only one line:


<%@ WebHandler Language="C#" Class="Experiment.ContactHandler" %>




ContactHandler.ashx.cs contains the actual code that handles the out-of-band AJAX request:

using System;
using System.Web;

namespace Experiment
{
    public class ContactHandler : System.Web.IHttpHandler
    {
        #region IHttpHandler Members

        // Handles the out-of-band AJAX request
        public void ProcessRequest(HttpContext context)
        {
            context.Response.Write("Hello World from ASHX");
            context.Response.End();
        }

        // The handler instance can be pooled and reused across requests
        public bool IsReusable
        {
            get { return true; }
        }

        #endregion
    }
}



A better way to manage ASHX files is to put the code-behind file in another project (a class library), like this:


Experiment.Handler
|
+-- ContactHandler.cs

Experiment.Web
|
+-- Handlers
|
+-- ContactHandler.ashx


In the above example, the ASHX file is put in the 'Handlers' folder inside the web project, and the code-behind file is put in a separate class library project. By structuring it this way, we can easily version, share, and reuse the code-behind code among several solutions.

Saturday, February 25, 2006

Why I use out-of-band AJAX requests

In my current project, I use a lot of out-of-band AJAX requests. In an out-of-band request, the request does not flow through the standard ASP.NET page life-cycle of its own page; instead it calls another page and follows that page's flow.

There has been a growing debate about the pros and cons of out-of-band requests. On the cons side, the out-of-band request breaks the ASP.NET model. Developers do not code in the manner they have been used to for years; instead they have to create another page to serve the AJAX request. In my company, we call these pages 'handlers' or 'AJAX handlers', or simply 'AJAX servers' to less-technical people :) Whatever the name, programming a handler is usually more raw and messy, since we have to let go of some nice ASP.NET features, like ViewState, that make web programming easier and more intuitive (more like event-driven programming).

On the pros side, the out-of-band request is more efficient, since it only carries the data the handler needs; it does not have to carry the hefty ViewState payload. The server-side processing is also more efficient, since it does not have to reconstruct the state of the whole control hierarchy. Moreover, handlers promote reusability and a clear separation of responsibility: a handler's only responsibility is to provide the correct response for a given request. It does not need to know how the UI is rendered, so one handler can be used by several UIs. (A small sketch of such a call follows.)
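
For illustration, here is a minimal sketch of how a UI might call such a handler from Javascript. The query string and the callbacks fillDropDown/fillAutoComplete are hypothetical:

function callHandler(url, callback) {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject('Microsoft.XMLHTTP');
    xhr.open('GET', url, true); // out-of-band: no postback, no ViewState payload
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            callback(xhr.responseText); // the caller decides how to render the response
        }
    };
    xhr.send(null);
}

// the same handler reused by two different UIs:
callHandler('Handlers/LookupHandler.aspx?type=country', fillDropDown);
callHandler('Handlers/LookupHandler.aspx?type=country', fillAutoComplete);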

So which kind of request should you choose? It really depends on how you structure the content of the page. In a common ASP.NET project, where one screen in the specification translates into one ASPX page, you can safely avoid out-of-band requests. Microsoft ATLAS does this, and the programming model does not change dramatically.

However, the moment you want to promote reusability, you will start using ASCX (user controls) inside the ASPX and later, to push reusability even further, custom controls. In this case, the out-of-band request is the better option (and perhaps the only practical way to implement AJAX requests). Consider that it is too expensive to reconstruct the whole page (and re-instantiate all the user controls/custom controls in it) only to serve a single AJAX request.

Thursday, February 23, 2006

Visual Studio 2005 Licensing

This afternoon, we went to the Microsoft office for a discussion with Rashish Pandey, Product Marketing Manager (Developer Tools) for Microsoft Singapore. The agenda of the meeting was to discuss the various options for buying Visual Studio 2005, as part of our plan to migrate to it.

Noted below is what I extracted from the discussion, plus a few hours of surfing the Microsoft site to seek further clarification. Bear in mind that these notes are my personal opinion and should not be taken as-is. If you are in the position of evaluating VS.NET 2005 licensing as well, I suggest starting from the Microsoft site and then seeking more explanation from a Microsoft representative.


To begin with, the Visual Studio 2005 licensing scheme is per user/developer, not per installation. Simply speaking, if there are 20 developers, we need 20 licenses to adequately cover all usage, regardless of how many instances of Visual Studio are installed.

VS.NET 2005 comes in 4 editions: Express, Standard, Professional, and Team System. Visit Visual Studio 2005 Product Feature Comparisons for a more comprehensive comparison of the editions. In my opinion, the Express and Standard editions are more suitable for hobbyists or individual developers working at home than for enterprise use. Both editions come bundled with SQL Server Express Edition, so we know how Microsoft positions them. Surprisingly, both the Express and Standard editions support SQL Reporting Services, so theoretically they can be used to create and publish reports to SQL Reporting Services.

From the Professional Edition up, there is SQL Server 2005 integration and an XML/XSLT editor, two major features missing from the first two editions. Microsoft positions the Professional edition for individual developers, but I believe that in practice, due to the overwhelming features and high price tag of the Team System edition, most companies will stick to the Professional edition.

At the high end of the product line is Visual Studio Team System, which consists of 4 products offered in 5 different packages. The products are the Architect, Developer, and Tester editions, each targeting a specific role in the software development lifecycle, plus Team Foundation Server to enable collaboration among those roles. You can buy the Team System Suite, which bundles the three role editions together. Team Foundation Server is sold separately and will be available in March 2006.

Users connecting to Team Foundation Server need a license known as a CAL (Client Access License). Every individual Visual Studio Team System edition comes with 1 CAL, which means the user automatically has a license to access Team Foundation Server. The Professional edition does not include a CAL, so you need to buy one if you want a developer on the Professional Edition to use Team Foundation Server.

Microsoft has a scheme called Software Assurance, which entitles you to a free upgrade to the next version of a product as long as you have a valid subscription for it. For VS.NET 2005, MSDN Subscription is a superset of Software Assurance. Other than free upgrades to future releases of Visual Studio, it also gives you phone-based support, newsgroup support, and a bundle of Microsoft operating systems, server products, betas, etc., licensed for development and testing only (Developer Edition). In my view, this is the most important benefit of an MSDN Subscription: developers can try various Microsoft products and run their applications in various environments without needing to buy licenses.

The Team System edition with the MSDN Subscription bundle comes with a 5-user-limited edition of Team Foundation Server called 'Workgroup Foundation Server'. This product is functionally equivalent to Team Foundation Server, but it is limited to 5 users. IMPORTANT NOTE: you cannot buy extra CALs to go beyond 5 users.

Finally, there is a downgrade licensing scheme available for VS.NET. It means you can buy VS.NET 2005 licenses to cover your VS.NET 2003 installations. It sounds uncommon, but it can be useful when you still have projects in VS.NET 2003, are not quite ready to jump to VS.NET 2005, and need more licenses to cover additional developers.

Thursday, February 16, 2006

Separation of Roles

As a software developer, I sometimes find myself in situations where I have to code in all layers/tiers of an application. In the web development context using Microsoft technology (where most of my experience lies), this means writing Javascript, dealing with divs and tables in HTML, writing business logic in C#, and going all the way down to writing stored procedures. Working in this manner is not a bad thing, since I get the full picture of the process, from the moment the user inputs the data to the data being saved in the database. Quite surprisingly, this happens in most of the companies I have worked for, regardless of project size.

The multi-responsibility role that a developer has to bear is quite common nowadays. Take a peek at the online job posts and you can easily notice that most developer job openings look for the all-rounder candidate who can do everything from A to Z and has experience working in all tiers.

I agree that in a small development project we don't have the luxury of proper design and planning; work items are effectively screens from a prototype, and often a single developer is assigned to code a screen from the UI tier down to the database tier. However, when the project gets larger and more developers join, I suggest splitting developers into several distinct roles:

1. UI/front-end developers. These people are most experienced in the event-driven nature of UI programming. In web development projects, these are the developers who are fluent in client-side scripting, prefer to hand-code HTML, fully understand the difference between a listbox and a dropdownlist, etc.

2. Middle-tier developers. These people deal with business object classes, web services, and data access layer classes.

3. Database developers. They live on a different side of the world from the other two types of developers. They speak only T-SQL. Their tools are Enterprise Manager and Query Analyzer instead of Visual Studio.


Projects that have a clear separation of roles enjoy the following benefits (derived from my experience working on such projects):

1. The right man in the right place. It sounds like a cliche, but it is true. Most developers will say 'yes', they can work in every tier, but in fact they are more effective in one tier than in the others. A developer's past experience tells much about this. Face this fact: developers who write well in object-oriented Javascript may not effectively write business object classes in C#, let alone a sophisticated stored procedure in T-SQL with proper error handling.

2. It promotes separation of responsibility in each tier, an important concept in object orientation. Although only a proper design can achieve true separation of responsibility, having different developers on each tier helps ensure no code sits in the wrong place, because each tier is written by different developers.

3. Each tier can be planned and progressed independently. For example, after the database design is done, the database developers can start working on the stored procedures. Meanwhile, once the class diagram is done, the middle-tier guys can start creating classes. Usually, the UI developers start later and finish later, as they have to work on the prototype and go through an iterative release-feedback process with the client.

4. It is easier to implement programming standards and conventions. Take this example: the naming convention for strongly-typed C# is different from that for loosely-typed Javascript, and T-SQL is most effective when written set-based against resultsets, while C# developers are more used to loops.

5. It promotes communication. In my previous company, developers in the same role sat next to each other in the same corner, promoting communication and code reuse among them. In a 20-person team where everybody works on the stored procedures, not everybody knows what the others have done.


As I mentioned earlier, the separation of roles may not be suitable for a small project where resources are limited. It is also not an all-good solution. Here are some disadvantages:

1. Developers may not understand the whole picture, as they only work in one tier instead of all tiers.

2. It is harder to track bugs and performance issues that run across tiers, because of point no. 1. If bug tracking is not managed properly, developers may start finger-pointing at each other.

Wednesday, February 15, 2006

GPL vs LGPL

The company I am working for is heading into the annual audit process. Apart from the security and compliance issues that the internal auditor needs to look into, the auditors will check whether we have unlicensed or under-licensed applications. An under-licensing situation arises if the licenses we have do not adequately cover all software installations or deployments.

In our discussion, we came across several licensing schemes, and surprisingly nobody had a clear understanding of them. The two most prominent licensing schemes, GPL (General Public License) and LGPL (Lesser GPL), seem to confuse many, as they look similar but are actually very different.

The GPL license allows free use and modification of the software, as long as we credit the original author and release the application (that uses the GPL'ed software) as open source. This is a major roadblock for commercial projects wanting to use GPL'ed software.

On the other hand, the LGPL license does not require the application to be distributed as open source and thus can be used in a commercial application. This licensing is increasingly popular among software libraries. However, there is an extra complication: any modification or addition (and even a wrapper) to the LGPL'ed library must also be released under the LGPL/GPL scheme. Other parts of the software that do not use the library are not affected and can therefore be released under any licensing scheme. In practice, if we create a wrapper class around the LGPL'ed library, or create a class that inherits from the library, then that code must be released under the LGPL scheme. In a .NET project, as a class library is compiled into an assembly, the assembly and its source code must be released under the LGPL. Not a pretty situation for a commercial project!

In summary, whenever your software project needs a third-party library/application, make sure you understand its licensing very well and judge carefully whether it fits the licensing of your own software.

Friday, February 10, 2006

A del.icio.us way to search

A simple idea turned into a great web application; that's what I like about del.icio.us.


Del.icio.us is an online bookmark repository. Instead of adding your favourite URLs to the browser's bookmarks, you can post them to Del.icio.us and access them from any computer on the web. A simple concept, but that's not all: you can also search what other people bookmark and see how popular those bookmarks are.


Again, the power of the virtual community is used here to leverage the usefulness of the site. The more people use Del.icio.us to post their bookmarks, the bigger the bookmark repository, and the more useful the search results that can be returned. I believe Del.icio.us is waiting for that critical-mass moment to happen before they can truly reap the profit of running this service.


Alas, Del.icio.us' infrastructure doesn't seem to cope well with the site's growth. In the past few weeks, the site was down several times due to power failures at their colocation servers. These issues should be taken seriously before people give up and turn to alternative sites.


Infrastructure issues aside, it is a great experience to use Del.icio.us in my daily surfing. I use the site to:

  • Manage my bookmarks. I surf at work and at home, so having a single bookmark repository is really helpful. Assigning tags also helps me categorize bookmarks by keyword, which I find easier to remember than the file-system-like structure of the classic bookmark system.
  • Share my technical bookmarks with colleagues more easily: I just point them to my Del.icio.us URL, with no need to cut and paste links into email or IM. However, privacy is a big concern here, because they can also see my personal bookmarks.
  • Search the Internet. See what other people are bookmarking! Think of Del.icio.us as an alternative to Google search.
  • Learn about the web interface... simple but powerful. More about this in the next post.

Other than the privacy issue, my other concern is that some people may start to misuse the system by posting URLs for advertising rather than as actual bookmarks. The more people post the same URL, the more popular the bookmark becomes. This would taint the usefulness of the site as an alternative way to search the Internet.

New Year, New Life, New Resolution

One of my new year's resolutions is to blog more actively. Looking back at my first post in April 2005, that was 10 months ago, and there are only 5 posts so far. What a shame... I should do better this year!

In 2006, I also started a new life as a married man, with my lovely wife Josepha. It has been about a month since we moved from Novena to an HDB flat in Braddell. Initially, the flat was pretty empty: only basic furniture like a bed, an L-shaped sofa, a washing machine, and a fridge was left by the owner and the previous tenant. But after a few trips to IKEA, we finally furnished our unit to a comfortable state.

Now I have a place to put the laptop and, YES, we have a broadband Internet connection at home. So let's start blogging :)