Monday, November 13, 2006

Passed 70-431

This Saturday, I passed my first exam in the new certification series: TS: Microsoft SQL Server 2005 - Implementation and Maintenance (70-431).

This is by far the hardest exam I have taken. Don't get me wrong. I have taken 5 Microsoft exams before, and one of them was exam 70-229 on SQL Server 2000. However, since my background is more that of an application developer than a database administrator, I found the exam very challenging. Unfortunately for me, it covers only a little T-SQL programming, the material I am most familiar with. Most of the exam material focuses on administrative/maintenance tasks such as backup/restore, indexing, and the various high availability technologies.

Lucky for me, I still managed to score 947 out of 1000.

Here are a few useful tips for exam takers:

  • If you're unfamiliar with the administrative/maintenance stuff, get a good book that explains the concepts rather than just giving you facts. By learning this way, you don't need to squeeze your brain to absorb all the new terminology. I highly recommend the MCTS Self-Paced Training Kit (the 'blue' book). Find out about the book here.
    I also read the Exam Cram (the 'red' book), but it was not that helpful for me.
  • Do some practice. Follow the practices/labs from the book (if any). I find that practicing helps me grasp the concepts quicker. I prefer to see things executed rather than just believing what the author writes in the book. Not convinced yet? Go to the next point.
  • Expect to see simulation questions... and a lot of them. So you really need to practice.
  • Download the latest SQL Server Books Online and read additional material not covered in the book, but don't spend too much time studying this manual. Admit that you don't have the time and energy to study everything. It's OK to miss one or two difficult questions, but don't miss the easy ones just because you didn't have time to study that chapter.
  • Still related to the previous point... Don't try to remember all the switches, parameters, options, etc. of the SQL syntax. There are just too many... Only remember the important ones. How do you identify the important ones? Read the book. If a very detailed question appears on your exam and you really have no idea, use your intuition or skip the question for later review. You may find the clue in another question.
  • As always, Microsoft is proud of its new technologies/features, so put emphasis on the new features, such as the XML data type, database mirroring, and some enhancements to the T-SQL syntax.
  • Write down on a piece of paper (OK, you can use Notepad) all the important dynamic management views and functions. Review the notes just before you enter the exam room.

Hope this helps.

Tuesday, October 17, 2006

Scripting SQL Server database with ScriptTableData.cst

Recently, I worked on SQL scripts for database deployment. I use VS.NET 2003's Database Project to maintain the Create Scripts and Change Scripts in SourceSafe. Those scripts are run automatically by the build tool (NAnt) to create the database from scratch at any time and to update the existing database as required.

The database project in VS.NET is an excellent tool to script database objects like tables, views, UDFs, and stored procedures. You just need to drag and drop those objects from the Server pane to the Solution pane, and all the required SQL scripts are generated and added to the solution automatically. However, it falls short when I use the database project to script the data. The data is stored as a binary file, so I lose control of it.

Surprisingly, Codesmith 2.6 (the free edition) comes with a template called ScriptTableData.cst that will do exactly what I want: to script the data as SQL statements. It produces INSERT INTO... SELECT statements that can be run on the target database to populate the data.

Unfortunately, the ScriptTableData template contains some annoying bugs. For example, the bit data type is scripted as "true" or "false" instead of "1" or "0", causing errors when you execute the produced SQL script. Moreover, it does not handle the datetime and binary data types very well.

I decided to modify the script to suit my work. The following is my modified version of ScriptTableData.cst. All credit goes to the maker of this excellent code generator, Eric J. Smith. I only fixed some bugs and added a few more features.

Bugs fixed:

  • Boolean data type is handled properly
  • SET IDENTITY_INSERT appears only for tables with an identity column

Improvements:

  • More precise representation of the DateTime data type
  • Support for the binary data type
  • Support for two styles of scripting: Compact and Verbose. The Verbose style produces SQL scripts similar to those from commercial tools.

The template:


<%@ CodeTemplate Language="C#" Debug="True" TargetLanguage="T-SQL" Description="Generates a script based on the data from a table." %>
<%@ Property Name="SourceTable" Type="SchemaExplorer.TableSchema" Category="\tContext" Description="Table to get the data from." %>
<%@ Property Name="ScriptType" Type="ScriptTypeEnum" Category="Option" Default="Compact" Description="How the script is rendered" %>

<%@ Assembly Name="SchemaExplorer" %>
<%@ Assembly Name="CodeSmith.BaseTemplates" %>
<%@ Assembly Name="System.Data" %>
<%@ Import Namespace="SchemaExplorer" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Text" %>
<%@ Import Namespace="System.Collections" %>

<%
bool hasIdentity = HasIdentityColumn();
string tableName = string.Format("{0}.[{1}]", GetTableOwner(), SourceTable.Name);
string columnList = GetColumnList();
%>
USE <%= SourceTable.Database.Name %>
GO

DELETE <%= tableName %>
<% if (hasIdentity) { %>
DBCC CHECKIDENT ('<%= tableName %>', RESEED, 1)
SET IDENTITY_INSERT <%= tableName %> ON
<% } %>

<% if (ScriptType == ScriptTypeEnum.Compact) { %>
INSERT INTO <%= tableName %> (<%= columnList %>)
<% for (int i = 0; i < SourceTableData.Rows.Count; i++) { %>
SELECT <%= GetTableRowValues(SourceTableData.Rows[i]) %><% if (i < SourceTableData.Rows.Count - 1) { %> UNION<% } %>
<% } %>
<% } else { for (int i = 0; i < SourceTableData.Rows.Count; i++) { %>
INSERT INTO <%= tableName %> (<%= columnList %>) VALUES (<%= GetTableRowValues(SourceTableData.Rows[i]) %>)
<% } } %>

<% if (hasIdentity) { %>
SET IDENTITY_INSERT <%= tableName %> OFF
<% } %>
<script runat="template">

public enum ScriptTypeEnum
{
    Compact,
    Verbose
}

private DataTable _sourceTableData;

// Lazily loads the table data through SchemaExplorer
private DataTable SourceTableData
{
    get
    {
        if (_sourceTableData == null)
        {
            _sourceTableData = SourceTable.GetTableData();
        }
        return _sourceTableData;
    }
}

// Builds a comma-separated, bracketed column list
public string GetColumnList()
{
    ArrayList columnList = new ArrayList(SourceTable.Columns.Count);
    foreach (ColumnSchema column in SourceTable.Columns)
    {
        columnList.Add(string.Format("[{0}]", column.Name));
    }
    return string.Join(", ", (string[]) columnList.ToArray(typeof(string)));
}

// Renders one data row as a comma-separated list of SQL literals
public string GetTableRowValues(DataRow row)
{
    int columnCount = SourceTable.Columns.Count;
    ArrayList valueList = new ArrayList(columnCount);
    for (int i = 0; i < columnCount; i++)
    {
        ColumnSchema column = SourceTable.Columns[i];
        if (row[i] == DBNull.Value)
        {
            valueList.Add("NULL");
        }
        else
        {
            switch (column.NativeType.ToLower())
            {
                case "bigint":
                case "decimal":
                case "float":
                case "int":
                case "money":
                case "numeric":
                case "real":
                case "smallint":
                case "smallmoney":
                case "tinyint":
                    // numeric types: no quoting needed
                    valueList.Add(row[i].ToString());
                    break;

                case "bit":
                    // boolean type: script as 1/0, not true/false
                    string val = ((bool) row[i]) ? "1" : "0";
                    valueList.Add(val);
                    break;

                case "varbinary":
                case "binary":
                    // binary types: script as a hex literal (0x...)
                    valueList.Add(GetHexStringFromBytes((byte[]) row[i]));
                    break;

                case "datetime":
                case "smalldatetime":
                    // datetime types: script with millisecond precision
                    DateTime dt = (DateTime) row[i];
                    valueList.Add(string.Format("'{0:yyyy-MM-dd HH:mm:ss.fff}'", dt));
                    break;

                default:
                    // other types: quote and escape
                    valueList.Add(string.Format("'{0}'", PrepareValue(row[i].ToString())));
                    break;
            }
        }
    }
    return string.Join(", ", (string[]) valueList.ToArray(typeof(string)));
}

// Escapes quotes and preserves line breaks inside string literals
public string PrepareValue(string value)
{
    return value.Replace("'", "''").Replace("\r\n", "' + CHAR(13) + CHAR(10) + '").Replace("\n", "' + CHAR(10) + '");
}

public string GetTableOwner()
{
    string owner = SourceTable.Owner;
    if (!owner.Equals(string.Empty))
    {
        return string.Format("[{0}]", owner);
    }
    return string.Empty;
}

// Converts a byte array to a SQL hex literal such as 0x1A2B
public string GetHexStringFromBytes(byte[] bytes)
{
    if (bytes == null)
    {
        return string.Empty;
    }

    int byteCount = bytes.Length;
    StringBuilder sb = new StringBuilder(byteCount * 2 + 2);
    sb.Append("0x");
    for (int i = 0; i < byteCount; i++)
    {
        sb.Append(bytes[i].ToString("X2"));
    }
    return sb.ToString();
}

public bool HasIdentityColumn()
{
    foreach (ColumnSchema column in SourceTable.Columns)
    {
        if (column.ExtendedProperties["CS_IsIdentity"].Value.ToString() == "True")
        {
            return true;
        }
    }
    return false;
}

</script>


Friday, October 06, 2006

Detecting user authentication expiration from an AJAX request

Problem:
How to detect that an AJAX request has been redirected to a login page by ASP.NET's Forms Authentication?

Background:
Forms Authentication is a standard user authentication scheme in ASP.NET. As a refresher, this scheme uses a cookie to track whether a user has already logged in. A user requesting a secured page is redirected to a login page if he/she is not authenticated yet. Once the user is authenticated, the server redirects to the page that the user originally requested.

Forms Authentication works fine for normal ASPX pages. When the user is idle for a certain amount of time (the default is 30 minutes), the cookie expires, and the user is forced to log in again.

In an AJAX application, the server behaves the same way. When the cookie expires, the server automatically redirects to the login page. However, since the request is made through an XmlHttpRequest object, the browser does not load the login page; instead, the content of the login page is retrieved by the XmlHttpRequest object.


Solution:
I use a custom HTTP header to differentiate the login page from the other pages in the web application. This way, Javascript can easily identify whether an AJAX request has been redirected to the login page (well... it actually detects whether it received the login page instead of the expected response).

In the code behind of the login page, add the following code:

// add a custom HTTP header to identify that this is a login page
Response.AppendHeader("IsLoginPage","1");

In the Javascript code, in the function that handles the XmlHttpRequest's response, add the check for the custom header:

xmlHttp.onreadystatechange = function() {
    if (xmlHttp.readyState == 4) {
        if (xmlHttp.status == 200 && xmlHttp.responseText != null) {
            if (xmlHttp.getResponseHeader('IsLoginPage') == '1') {
                alert('Your session has expired. Please log in again.');
            } else if (typeof(responseHandler) == 'function') {
                responseHandler(xmlHttp.responseText);
            }
        }
    }
};


I decided to simply display an alert so that the user is aware of the situation. My expectation is that, upon receiving this message, the user will explicitly log in to the application again.
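The header check can also be factored into a small helper that is easy to test outside the browser. This is an illustrative sketch (the `isLoginRedirect` name and the stub response objects are mine, not part of the original code); it relies only on the `IsLoginPage` header set by the login page's code behind:

```javascript
// Hypothetical helper: detects the custom header that marks the login page.
// `response` is any object exposing getResponseHeader, like an XmlHttpRequest.
function isLoginRedirect(response) {
    return response.getResponseHeader('IsLoginPage') === '1';
}

// Stub objects standing in for real XmlHttpRequest responses:
var loginPageResponse = {
    getResponseHeader: function (name) {
        return name === 'IsLoginPage' ? '1' : null;
    }
};
var normalResponse = {
    getResponseHeader: function (name) { return null; }
};

console.log(isLoginRedirect(loginPageResponse)); // true
console.log(isLoginRedirect(normalResponse));    // false
```

Keeping the check in one function means the "session expired" handling stays in a single place if the header name ever changes.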


Tuesday, October 03, 2006

Data Encryption in SQL Server 2005

One of our products stores sensitive information in the database. Our client wants the sensitive information encrypted so that nobody except authorized persons can access it.

We want to implement the data encryption at the database level so that it becomes transparent to the applications using the data. The developers should not need to take care of data encryption/decryption, and, equally important, our reporting system, which is based on Reporting Services, should still work.

SQL Server 2000 does not have built-in encryption capability. Data encryption can be done using extended stored procedures that utilize an external DLL. This is definitely cumbersome.

It turns out that data encryption is a native feature in the new SQL Server 2005. So without further ado, I spent some time investigating the data encryption feature.

In SQL Server 2005, data can be encrypted using symmetric keys, asymmetric keys, certificates, or passphrases (plain text), with the last option being the least recommended. We can combine several encryption mechanisms to create an encryption hierarchy. For example, the data is first encrypted using a symmetric key, then the symmetric key is encrypted using an asymmetric key, and so on, to make the encryption stronger.

Encryption using certificates and asymmetric keys is slower but more secure than using symmetric keys. Microsoft recommends using a symmetric key to encrypt large amounts of data and then securing the symmetric key with an asymmetric key or a certificate.

The code following this blog entry is my first attempt to test the encryption in SQL Server 2005. I created a hypothetical Employee table that stores salary (as money data type), credit card number (varchar) and birth date (datetime). Since encrypted data is stored as varbinary, those columns are declared as varbinary instead of their original type.

Following Microsoft's recommendation for large amount of data, I use a symmetric key to encrypt/decrypt the data. The symmetric key is then encrypted by a certificate created internally in SQL Server. Since I don't specify any further encryption mechanism to secure the certificate, by default, the certificate is encrypted by the database master key. In the encryption hierarchy, the database master key is further encrypted by the service master key, and the service master key is secured by DPAPI at the operating system level.

I use the EncryptByKey and DecryptByKey functions for encryption/decryption. These functions only accept the varchar, nvarchar, char, nchar, varbinary, and binary data types. Other data types like datetime and money need to be CAST/CONVERTed to varbinary. The symmetric key must also be opened/decrypted before we can use it for encryption/decryption.

In the view, I use DecryptByKeyAutoCert that automatically opens the symmetric key and uses it to decrypt the cipher text.


The code:


-- create database
CREATE DATABASE EncryptionTest
GO
 
-- use database
USE EncryptionTest
GO
 
-- create master key for the new database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@ssw0rd'
GO
 
-- Create table
CREATE TABLE [dbo].[Employee](
       [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
       [Name] [nvarchar](200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
       [Position] [nvarchar](200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
       [Salary] [varbinary](256) NOT NULL,
       [CreditCard] [varbinary](256) NULL,
       [BirthDate] [varbinary](256) NULL,
 CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED
(
       [EmployeeID] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
 
-- create certificate to encrypt the symmetric key
CREATE CERTIFICATE EmployeeCert
       WITH SUBJECT = 'Company Certificate',
       START_DATE = '1/1/2006',
       EXPIRY_DATE = '12/31/2006';
GO
 
-- create symmetric key and encrypt it with the certificate
CREATE SYMMETRIC KEY EmployeeKey
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY CERTIFICATE EmployeeCert
GO
 
-- create a view to access Employee table
CREATE VIEW [dbo].[vw_Employee]
AS
SELECT  [Name],
              Position,
              CONVERT(MONEY, DecryptByKeyAutoCert(CERT_ID('EmployeeCert'), NULL, Salary)) AS Salary,
              CONVERT(VARCHAR, DecryptByKeyAutoCert(CERT_ID('EmployeeCert'), NULL, CreditCard)) AS CreditCard,
              CONVERT(DATETIME, DecryptByKeyAutoCert(CERT_ID('EmployeeCert'), NULL, BirthDate), 112) AS BirthDate
FROM dbo.Employee
GO
 
-- *** Batch: Encryption
 
-- open symmetric key
OPEN SYMMETRIC KEY EmployeeKey
DECRYPTION BY CERTIFICATE EmployeeCert
 
-- get symmetric key id to be used in the encryption
DECLARE @KeyGUID uniqueidentifier
SET @KeyGUID = KEY_GUID('EmployeeKey')
 
-- insert some records to Employee table
INSERT INTO dbo.Employee([Name], Position, Salary, CreditCard, BirthDate)
SELECT
       'John Smith',
       'CEO',
       EncryptByKey(@KeyGUID, CONVERT(VARBINARY(256), $200000)),
       EncryptByKey(@KeyGUID, '4444-3333-2222-1111'),
       EncryptByKey(@KeyGUID, CONVERT(VARBINARY(256), CONVERT(DATETIME, '19400502', 112)))
UNION
SELECT
       'Garry Baker',
       'General Manager',
       EncryptByKey(@KeyGUID, CONVERT(VARBINARY(256), $150000)),
       EncryptByKey(@KeyGUID, '4444-3333-2211-1144'),
       EncryptByKey(@KeyGUID, CONVERT(VARBINARY(256), CONVERT(DATETIME, '19450108', 112)))
UNION
SELECT
       'Natasha Smith',
       'Account Manager',
       EncryptByKey(@KeyGUID, CONVERT(VARBINARY(256), $120000)),
       EncryptByKey(@KeyGUID, '4444-1111-1111-1111'),
       EncryptByKey(@KeyGUID, CONVERT(VARBINARY(256), CONVERT(DATETIME, '19550501', 112)))
 
-- close key
CLOSE SYMMETRIC KEY EmployeeKey
GO
 
-- *** Batch: Decryption
 
-- open symmetric key
OPEN SYMMETRIC KEY EmployeeKey
DECRYPTION BY CERTIFICATE EmployeeCert
 
-- select Employee
SELECT  [Name],
              Position,
              CONVERT(MONEY, DecryptByKey(Salary)) AS Salary,
              CONVERT(VARCHAR, DecryptByKey(CreditCard)) AS CreditCard,
              CONVERT(DATETIME, DecryptByKey(BirthDate), 112) AS BirthDate
FROM dbo.Employee
 
-- close key
CLOSE SYMMETRIC KEY EmployeeKey
GO

Tuesday, September 05, 2006

Javascript Autocast

A typical day at work requires me to constantly switch between writing code in C# and Javascript. This affects the way I write code in both languages. Since C# is stricter than Javascript, my coding style leans towards C#, and sometimes I find myself writing Javascript code in C# style, which can be viewed as a good or a bad thing.

Every language has its goodies that, if we take advantage of them, can produce better-quality code.

Let's take a common task as an example. We want to test a string variable for null and empty.

In C#, I would write:


string str;
...

if (str != null && !str.Equals(string.Empty)) { }


Applying the same C# style to Javascript will result in the following code:

var str;
...
if (str != null && str != '') { }

The above code is not optimal. In Javascript, we can utilize the 'autocast' feature to reduce the code to:

var str;
...
if (str) { }

A null or an empty string autocasts into false, and therefore the condition is evaluated as false.

As you can see, the code is much shorter and cleaner. Size does matter in Javascript. Shorter code translates into faster download time.

Autocast also applies to other types/conditions:

  • An undefined variable is evaluated as false. A variable is undefined if it hasn't been assigned a value.
    Example:

    var myVar;
    if (myVar) { } // false


  • Empty string is evaluated as false

    var myStr = '';
    if (myStr) { } // false

    var myStr2 = 'ABC';
    if (myStr2) { } // true

  • Zero is evaluated as false

    var myNum = 0;
    if (myNum) { } // false

    var myNum2 = 1;
    if (myNum2) { } // true

  • Null is evaluated as false

    var myObj = null;
    if (myObj) { } // false

  • Object is evaluated as true

    var myObj = {};
    var myObj2 = new Object();
    if (myObj) { } // true
    if (myObj2) { } // also true


  • Empty array is evaluated as true
    An array is essentially an object, therefore it is evaluated as true.

    var myArray = [];
    var myArray2 = new Array();
    if (myArray) { } // true
    if (myArray2) { } // true
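The autocast rules above can be collected into one runnable sketch (the `truthy` helper and the `results` object are mine, added just to make the outcomes explicit):

```javascript
// Coerce each value to a boolean, mirroring what an if-statement does.
function truthy(v) { return v ? true : false; }

var results = {
    undefinedVar: truthy(undefined), // false - undefined
    emptyString:  truthy(''),        // false - empty string
    zero:         truthy(0),         // false - zero
    nullValue:    truthy(null),      // false - null
    object:       truthy({}),        // true  - object
    emptyArray:   truthy([])         // true  - an empty array is still an object
};
```

Note that `if (str)` therefore covers undefined, null, and empty string in a single test, which is exactly why the shortened form works.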

Friday, September 01, 2006

Javascript optimization

The Internet Explorer Team (the team that brings us IE 7) has posted a nice article about Javascript optimization. This is the first part of a scheduled three-part article. In this article, the main drive behind the optimization is to reduce the number of symbolic lookups the Javascript engine makes to map a variable name to its real object.

It is quite rare to find such optimization tips from the maker of a browser. I am glad the IE team is finally paying attention to improving Javascript development. We need more articles like this, as more and more Javascript code is written nowadays. In my current project, which uses ASP.NET 1.1, about 75% of the UI code is written in Javascript, and the rest is the C# code-behind.

Please find the article here
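As I understand it, the gist of the technique is to cache deep references in local variables so the engine resolves each name only once instead of on every access. A hedged sketch (the `data`/`config`/`items` names are made up for illustration, not taken from the article):

```javascript
var data = { config: { items: [1, 2, 3, 4] } };

// More lookups: data.config.items (and its .length) is resolved
// symbolically on every iteration of the loop.
var total = 0;
for (var i = 0; i < data.config.items.length; i++) {
    total += data.config.items[i];
}

// Fewer lookups: resolve the chain once, then use cheap local-variable
// access inside the loop.
var items = data.config.items;
var total2 = 0;
for (var j = 0, len = items.length; j < len; j++) {
    total2 += items[j];
}
// Both loops compute the same sum; only the lookup cost differs.
```

The same idea applies to DOM chains like `document.forms[0].elements`, which are far more expensive to resolve repeatedly than a plain local variable.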

Tuesday, August 29, 2006

Know your end users

On a Saturday morning, I went to a bank to do some over-the-counter transactions. I was attended by a customer service staff member who used a Dell desktop computer with an LCD monitor.

My request took many forms to fill in and a lot of data entry into the system. I couldn't see the monitor, but I could imagine from how busy the banker was entering data with the keyboard. Surprisingly, she didn't use the mouse at all and relied entirely on the keyboard and its function keys (F1-F12). The mouse was connected to the computer, but she had put it in front of the keyboard, so I believe she just wanted to make some space by getting rid of it.

On another occasion, in a travel agency, I noticed a different balance between keyboard and mouse usage. I was booking an airline ticket, and the staff member attending me used both keyboard and mouse. However, he used the keyboard much more frequently, to enter not-so-user-friendly commands in the terminal window, and only occasionally used the mouse, to click on the big toolbar located on top of the terminal window. Perhaps the toolbar is used to execute simple commands like 'Print Flight Itinerary'.

Drawing from the two short scenarios above, we can see that different users have different ways of using an application. Naturally, the platform of an application defines its limitations; in a terminal window where everything is text, the keyboard is definitely the main input device. However, in most of today's applications, desktop-based or web-based, the mouse and the keyboard are both acceptable input devices. Still, many people choose to use mainly the keyboard alone. They have valid reasons, most probably that they are so familiar with the keyboard that they can operate faster than with the mouse.

It is paramount for us, the software developers, to know the behaviour of the end users who will actually use the applications we build. Imagine if we developed a cool interactive web application, fully enriched with DHTML popups, animations, and drag and drop, only to realize after the release that the users prefer to navigate using combinations of keyboard arrows and tabs rather than the mouse.

I highly recommend that developers, who spend most of their time behind the scenes, come out of their cubicles and pay a visit to the client's office. Look at how your end users use the application that you build. I bet you will be surprised, and it may change the way you design and develop your applications.

Tuesday, August 22, 2006

Visual Studio 2003 SP1 is finally here

Finally, the long overdue, first and possibly the last, and much anticipated Service Pack 1 for Visual Studio 2003 is released. You can download the 156MB package from the download page. The service pack offers no new features, but it fixes many bugs listed in the bug list.

One of the fixes that looks promising is no. 832714: "Visual Studio cannot open a Web site if a duplicate Web site exists". This problem often happens when I open a fresh new web project from SourceSafe and manually create the virtual directory for the web project. However, I haven't tested this fix yet.

The installation of SP1 requires Visual Studio 2003 CD 1, so make sure you keep it handy. If your team has several developers, it's more efficient to copy the CD to a shared network drive and point the installer to look at the shared drive.

Saturday, August 05, 2006

Check an HTML element is in view

Most of the web pages that I create contain a form to let the user enter data. Generally, I put in a customized ASP.NET validation summary control to display validation errors in a unified way. Since this is a customized control, I also use it to display the status of an AJAX operation, whether successful or not.

Whenever I display a message, I want to immediately grab the user's attention. Initially, I called object.scrollIntoView. This is an IE-specific method that scrolls the screen so that the corresponding object is in view. As a result, the element is aligned at either the top or the bottom of the screen. This is OK, but here is what happens when the object is already in view: the screen jiggles and changes position so that the object sits at the top. Quite an annoying experience for the user!

So I wrote this Javascript function that checks whether an object is in view.

The function accepts two parameters: a reference to the object (HTML element) and a boolean value bWhole. If you set bWhole to false, the function only checks the top-left corner of the element; it returns true if the top-left corner is within the viewable area. If bWhole is true, the function checks that both the top-left and bottom-right corners are within the viewable area.


function isInView(o, bWhole) {
    if (typeof(o) == 'undefined' || !o) return false;
    if (typeof(bWhole) == 'undefined') bWhole = false;

    // compute the element's absolute position by walking up the offset chain
    var x1 = o.offsetLeft;
    var y1 = o.offsetTop;
    var p = o;
    while (p.offsetParent) {
        p = p.offsetParent;
        x1 += p.offsetLeft;
        y1 += p.offsetTop;
    }
    var x2 = x1 + o.offsetWidth - 1;
    var y2 = y1 + o.offsetHeight - 1;

    // current viewable area of the document
    var left = document.body.scrollLeft;
    var right = left + document.body.clientWidth - 1;
    var top = document.body.scrollTop;
    var bottom = top + document.body.offsetHeight - 1;

    // bWhole: the whole element must fit in view;
    // otherwise only the top-left corner is checked
    return (bWhole) ?
        (x1 >= left && x2 <= right && y1 >= top && y2 <= bottom) :
        (x1 >= left && x1 <= right && y1 >= top && y1 <= bottom);
}

Monday, July 31, 2006

UI Design Patterns: Hierarchical Master Detail

Problem Summary

The user needs to traverse through a hierarchical or tree-like structured data and do CRUD (Create, Read, Update, and Delete) operations.


Screenshot

Use When

  • The data is structured in a hierarchical manner. Example: organization structure, web pages.
  • The data has enough fields/columns to be displayed as a form, and the form fits within one screen length.

Don't Use When

  • The data is flat
  • The data consists of very few fields, e.g. only a key and a value, or consists of many fields that span several screen lengths.

Solution

  • Divide the screen into two columns.
  • The left column contains a tree view control to let the user navigate through the hierarchical data. On top of the tree view control is a toolbar (or a collection of buttons) to work on the tree view. In the screenshot, it has "Add Child" to add a child node, "Add Root" to add a root node, and "Remove" to remove a node.
  • The right column contains a form that the user can use to add new data and update existing data.
  • The form on the right only appears when there is a selected node on the left.

Suggested Improvement

  • Drag and drop among nodes in the tree view. Drag and drop is a sophisticated operation that moves a branch of hierarchical nodes from one parent to another parent.
  • Clone node feature for faster creation of new data. Instead of always starting with blank form, the user is aided with a copy of data from another node.
  • Load on demand treeview to handle large hierarchical data.
  • Banding to improve the performance when there are too many children under one parent node
  • Hierarchical delete. When the parent node is removed, all its direct and indirect children nodes are also deleted. If this is not possible, then the user should only be able to delete from the bottom-up.

Sunday, July 30, 2006

Enterprise UI Design Patterns anyone?

As mentioned in my previous post, I have been closely following the UI design patterns published on the Internet. These patterns are real problem solvers; however, I feel the majority of them do not apply to the type of web application I am building.

In my current company and previous ones, I have built web applications for business use. This type of application has its own distinct characteristics and problems:
  • Manipulates a lot of data in form layout
  • Majority of the operations are CRUD (Create Read Update Delete)
  • Data validations
  • Hierarchical and flat data structure
  • Deals with master-detail relationship
  • etc.

I have been identifying some UI patterns that my team and I often use in web applications to tackle the same recurring problems. In the next posts, I will start documenting those UI patterns so that my readers can benefit from them. Feel free to comment on my patterns, as I continuously need to improve them.

Tuesday, July 25, 2006

UI Design Pattern Galore

Nowadays, a growing number of web sites collect UI design patterns. UI design patterns are common solutions to recurring problems in designing user interfaces. Whenever I face a usability issue or make a judgement call on screen design, I always go back to the collection of UI design patterns. Therefore, it is useful to bookmark web sites that collect design patterns. My top three sites on the list are:

Yahoo! Design Pattern Library
It has a lot of interesting (read: advanced) patterns like drag and drop, animations, etc. It is good to discover what today's web applications can do. When you are ready to put the patterns into practice, try the companion Yahoo! UI library.

Designing Interfaces
I read the book with the same title before discovering this site. This is the book/site for discovering more UI patterns beyond the web page.

Patterns in Interaction Design
Plenty of patterns and tons of screenshots make this site a good reference.

Wednesday, July 19, 2006

Microsoft hasn't abandoned us!

Better late than never. According to this Microsoft blog, Microsoft will release Service Pack 1 for VS.NET 2003. The beta has been out for some time, and they have finally decided to release it on 15 Aug 2006, if no further delays are introduced.

This is certainly good news for me, as I am still using VS.NET 2003. I have been struggling in the past few weeks because of the infamous "Unexpected error creating debug information file ... The process cannot access the file because it is being used by another process". In my case, the aspnet_wp.exe process is locking the PDB file, so every time I want to build the solution I need to kill the process manually. Furthermore, whether this is related or not, my VS.NET debugger is not working properly. When I mouse over a variable, the value does not come up correctly. The value appears much later, after I have traversed down several lines of code, so QuickWatch and Watch become useless. This problem remains unsolved even though I took the drastic measure of uninstalling and reinstalling VS.NET.

Fingers crossed that VS.NET 2003 SP1 will fix those problems; otherwise I'll have to live with them until the next upgrade to VS.NET 2005 :(

Thursday, July 13, 2006

Web server and database server time difference

Time difference between the web server and the database server can cause hard-to-find bugs. This is what I experienced recently after deploying a web application to a live server. Unlike our development server, where IIS and SQL Server reside on one machine, on the live server SQL Server sits on one machine and IIS on another.

For some reason, time synchronization between the two servers (Windows 2003) did not work, and as a result there was a significant time difference between them.

Because of the time difference, application features that depend on a comparison between the current time and a stored datetime value do not work properly. For example, when I save a user's password expiry date, I call DateTime.Now in the application (using the web server's time) and save the value to the database. In the stored procedure, I check whether a password has expired using a statement like:

-- check if password is expired
IF @PwdExpDate <= GETDATE()
    -- password is expired
ELSE
    -- password is still valid


Since the password expiry date is set on the web server and then compared on the database server, this comparison does not work properly due to the time difference.

Although fixing the time difference between the two servers is easy, I started to wonder whether we can always safely assume that the web server and the database server have the same time. Or do we need to introduce a programming guideline here:

"Datetime that is set in one machine can only be safely compared to the current time from the same machine"

In my case, the above guideline means either moving the comparison logic from the stored procedure to the application layer, or the other way around: setting the expiry date in the stored procedure and performing the comparison there as well.
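As a minimal sketch of the guideline (hypothetical names, nothing from our actual codebase), here is the first variant in application-layer code: both the timestamp and the comparison come from the same machine's clock, so a clock skew on the database server cannot break the check:

```javascript
// Sketch of the guideline: the expiry timestamp is taken from the
// application server's clock, and the comparison uses that SAME clock,
// never the database server's GETDATE().
function setPasswordExpiry(user, days) {
  // new Date().getTime() reads the application server's clock.
  user.pwdExpDate = new Date().getTime() + days * 24 * 60 * 60 * 1000;
}

function isPasswordExpired(user) {
  // Compared against the same clock that produced the timestamp.
  return user.pwdExpDate <= new Date().getTime();
}

var user = {};
setPasswordExpiry(user, 30);
console.log(isPasswordExpired(user)); // prints: false
```

The second variant is symmetric: both DATEADD on the expiry date and the GETDATE() comparison happen inside the stored procedure, so only the database server's clock is involved.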

Friday, June 30, 2006

Ordering Category in CodeSmith

While working on a CodeSmith template (*.cst), I ran into a small problem ordering categories in a property grid. The template I created has several categories, and I want them displayed in a specific order, not alphabetically as per the default.

I came across this discussion, which gave me a nice trick to order the categories. We need to prefix the category name with a special character that does not get displayed by the property grid. A tab character (\t) will do the trick, and multiple tab characters can be used as well. So, for example, if I want my categories in the following order:

Context
Persister
BusinessEntity
UnitTests

Then in the CodeSmith template I write:

<%@ Property Name="SourceTable" Type="SchemaExplorer.TableSchema" Category="\t\t\tContext" %>
<%@ Property Name="CreatePersister" Type="System.Boolean" Category="\t\tPersister" Default="True" %>
<%@ Property Name="CreateBusinessEntity" Type="System.Boolean" Category="\tBusinessEntity" Default="True" %>
<%@ Property Name="CreatePersisterUnitTest" Type="System.Boolean" Category="UnitTests" Default="True" %>

Thursday, June 15, 2006

Ajax and Auto Save

The following article summarizes our experience building an auto save feature in a web application. New technology and new features always bring fresh challenges to developers and good things for users, but they may also raise issues that have never come up before. I hope that by sharing this experience, you will be more aware of these issues when working on a similar feature.


One of the recent challenges my team had was handling the session timeout issue when our users spend too much time on a web form. As background, a user session times out when there is no communication (client request) with the server for a certain period. When a timeout happens, the user has to log in again, and this potentially destroys any unsaved changes the user has made on the web form.


We don't favor increasing the session timeout, since that imposes a greater security risk on our application. What we need is a feature or two that handles session timeouts gracefully.


We came up with the idea of auto save after using Gmail for a while. The auto save feature has been in Microsoft Word for as long as I can remember, so we often take it for granted, but it is a new and sexy feature for a web application.


In case you haven't seen how auto save works in Gmail, the feature runs silently in the background. It detects changes in the email content and periodically sends the content to the server for saving. A short, unobtrusive message appears to indicate that the auto save is done.


So we made up our minds to implement auto save in our web form. The web form is much more complex than Gmail's form: we have about 25 fields (textbox, dropdownlist, textarea) and a rich text box that can contain HTML.


Briefly this is what we did:


  1. A timer is set in JavaScript that invokes a function called saveData() every n minutes.

  2. saveData() calls a home-grown JavaScript form utility to extract the values of all fields into XML.

  3. We use AJAX to send the XML to the server.

  4. The server receives the XML and based on the status of the data does either one of the following:

    1. If the user has not explicitly saved the data yet, we save the XML into a table created for the sole purpose of holding data temporarily.

    2. If the user has explicitly saved the data before (for example, by clicking the save button), we deserialize the XML into a business object and use the data access layer (DAL) class to save the data into the proper tables.


  5. Upon completing the operation, the server returns a status and the JavaScript displays the auto save message.
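Steps 1-3 above can be sketched roughly like this (hypothetical names throughout; collectFormFields(), postViaAjax(), and AutoSaveHandler.aspx stand in for our real form utility and handler, which are more involved):

```javascript
// Rough sketch of steps 1-3: field values are wrapped in a small XML
// envelope and would be POSTed to the server with XMLHttpRequest.
function escapeXml(s) {
  return String(s).replace(/&/g, '&amp;')
                  .replace(/</g, '&lt;')
                  .replace(/>/g, '&gt;');
}

// Step 2: extract all field values into XML.
function extractFieldsToXml(fields) {
  var xml = '<form>';
  for (var name in fields) {
    xml += '<field name="' + name + '">' + escapeXml(fields[name]) + '</field>';
  }
  return xml + '</form>';
}

// Steps 2-3: build the XML and hand it to the AJAX call.
function saveData() {
  var xml = extractFieldsToXml(collectFormFields());
  postViaAjax('AutoSaveHandler.aspx', xml);
}

// Step 1: invoke saveData() every n minutes (here n = 2).
// setInterval(saveData, 2 * 60 * 1000);
```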


In my observation, it is important that the auto save message does not break the user's focus. We don't want the user to lose focus on the field he/she is working on just for the sake of auto save. The auto save feature should be as transparent as possible.


Points 1-5 above are what we did initially. After using the feature for a while, we noticed that the audit log was filling up fast. This is because we always saved to the database, regardless of whether the data had changed.


We had two options to solve the problem. The first option is to compare the values of the fields on the client side and send the XML only if there are differences. The second option is to compare the values on the server side. While the second option requires less effort, it has potential issues: since the comparison is done on the server side, the client needs to constantly send the XML data to the server, which increases traffic and keeps the server unnecessarily busy. It also means the user session never expires, opening the application to further exploitation. The first option is indeed more challenging, but it creates more efficient traffic and lets the session timeout still work properly.
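A minimal sketch of the first option (hypothetical names, not our production code): keep a snapshot of the serialized field values at the last save, and let the timer skip the AJAX call entirely when nothing differs from that snapshot:

```javascript
// Client-side dirty check: auto save only fires when the serialized
// field values differ from the snapshot taken at the last save.
var lastSaved = '';

function serializeFields(fields) {
  var parts = [];
  for (var name in fields) {
    parts.push(name + '=' + fields[name]);
  }
  return parts.join('&');
}

// Call after a successful save (manual or auto) to refresh the snapshot.
function markSaved(fields) {
  lastSaved = serializeFields(fields);
}

// Called by the timer: true only when something actually changed.
function isDirty(fields) {
  return serializeFields(fields) !== lastSaved;
}
```

Because an idle user produces no requests at all under this scheme, the server session can still expire naturally, which addresses the exploitation concern above.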


Let's see the beauty of auto save in the following mini case study:


A user logs in to the application and enters data in the form, but he hasn't pressed the save button. After n minutes, the auto save triggers and saves the data to the temporary table. Suddenly, there is a network disruption and the user loses his session.


After the network disruption is over, the user logs in to the application again and revisits the same form. This time the application checks the temporary table and finds the auto save data belonging to the user. The application pops up a message asking whether he/she wants to recover the data or continue with a blank form. If the user chooses to recover the data, the auto save data is loaded and populated into the form. If he chooses to start with a blank form, the auto save data is deleted.


Giving the option to recover the data or start with a blank form is a nice-to-have feature, because in some cases the lost data is not worth recovering.


Does auto save solve the session timeout issue? Partially, I would say. The session will still time out if the user does nothing, but with auto save the user will not lose all the data. I have been thinking of another feature that warns the user when the session is about to time out. That feature would complement auto save to make a really user-friendly form.

Friday, May 26, 2006

Sharing Visio Diagram

In my current company, I create a lot of database and class diagrams using Visio for Enterprise Architects (the one that comes with VS.NET 2003 Enterprise Architect) and share them with other people. Fellow developers who have VS.NET 2003 Professional always have problems viewing the files, because they don't have Visio installed on their machines.

Installing Visio Viewer 2003 doesn't help much. You can open a Visio file inside Internet Explorer, but the result is often unpredictable: contents of the entity diagram tend to overflow the table boundary, line thickness comes out wrong, and so on. Basically, I am not satisfied. Until now, the only solution has been to print on paper and distribute.

I just discovered (doh!) that we can publish a Visio diagram as web pages. The result is not much different from the original diagram viewed inside Visio. I do notice some minor defects, like dashed lines converted to solid lines, but so far the defects are insignificant.



In case you haven't discovered it, to export a Visio diagram to a web page, just choose File - Save As Web Page.

Inside the dialog box, you can set some parameters. In my observation, some options don't seem to have any effect. I also can't make the Custom Properties work, so I opt out of this option to remove the empty space reserved for the feature.

Visio creates a lot of HTML and VML files and packs them into a folder. I moved this folder to our development web server, and instantly everyone on the team could access the Visio diagram. The diagram is drawn using VML (Vector Markup Language), so it still looks nice when we zoom in and out.

Sunday, March 05, 2006

Managing ASHX files

In the web application I am working on, a web page is composed of several user controls. The composition happens at runtime and is driven by metadata stored in a database. As reconstructing a web page during postback is quite expensive, I opted for out-of-band AJAX requests for all our asynchronous calls.

Consequently, a growing number of ASPX files in our web project were created solely to handle AJAX requests. Initially, I created a folder called "Handlers" containing all the ASPX files that handle out-of-band AJAX requests, separating them from the normal ASPX files. I also suffix the file names with 'Handler', like 'LookupHandler.aspx', to further distinguish ASPX handlers from normal ASPX pages.

I am aware of ASHX as an option for handling out-of-band AJAX requests instead of ASPX. A lot of people say ASHX is simpler and should therefore run faster, though I would need to see a benchmark to support this.

Although ASHX seems to offer some performance benefit over ASPX, I was initially discouraged from using it in our projects, or even recommending the approach to my fellow developers. ASHX is not well supported by VS.NET 2003: there is no file template to begin with and, worst of all, there is no IntelliSense support. This thought stuck with me until I got some spare time to experiment more with ASHX.

Not to my surprise, ASHX supports code-behind, but the code-behind does not work as seamlessly as with ASPX. I can't make the code-behind file a child of the ASHX file in the web project, like:


ContactHandler.ashx
|
+- ContactHandler.ashx.cs


This does not work.

What I can do is structure the ASHX and code-behind files at the same level. Not as nice as the way ASPX files are structured, but still acceptable.

ContactHandler.ashx
ContactHandler.ashx.cs

To have ASHX work properly with code-behind, we have to add a class attribute to the declaration.

ContactHandler.ashx contains only one line:


<%@ WebHandler Language="c#" Class="Experiment.ContactHandler" %>




ContactHandler.ashx.cs contains the actual code that handles the out-of-band AJAX request:

using System;
using System.Web;

namespace Experiment
{
    public class ContactHandler : System.Web.IHttpHandler
    {
        #region IHttpHandler Members

        public void ProcessRequest(HttpContext context)
        {
            context.Response.Write("Hello World from ASHX");
            context.Response.End();
        }

        public bool IsReusable
        {
            get { return true; }
        }

        #endregion
    }
}



A better way to manage ASHX is to put the code behind file in another project (a class library) like:


Experiment.Handler
|
+-- ContactHandler.cs

Experiment.Web
|
+-- Handlers
|
+-- ContactHandler.ashx


In the example above, the ASHX file is put in the 'Handlers' folder inside the web project and the code-behind file is put in a separate class library project. By structuring it this way, we can easily version, share, and reuse the code-behind code among several solutions.

Saturday, February 25, 2006

Why I use out-of-band AJAX requests

In my current project, I use a lot of out-of-band AJAX requests. In an out-of-band request, the request does not flow through the standard ASP.NET page life-cycle of the page that issued it; instead it calls another page and follows that page's flow.

There has been a growing debate regarding the pros and cons of out-of-band requests. On the cons side, out-of-band requests break the ASP.NET model. Developers do not code in the manner they have been used to for years; instead they have to create another page to serve the AJAX request. In my company, we call these pages 'handlers' or 'AJAX handlers', or simply 'AJAX servers' to less technical people :) Whatever the name, programming a handler is usually more raw and messy, since we have to let go of some nice ASP.NET features like ViewState that make web programming easier and more intuitive (more like event-driven programming).

On the pros side, an out-of-band request is more efficient, since it only carries the data the handler needs; it does not carry the hefty ViewState payload. The server-side processing is also more efficient, since it does not need to reconstruct the state of the whole control hierarchy. Moreover, handlers promote reusability and a clear separation of responsibility: a handler's only job is to provide the correct response to a given request. It does not need to know how the UI is rendered, so one handler can serve several UIs.

So which kind of request should you choose? It really depends on how you structure the content of the page. In a common ASP.NET project where one screen in the specification translates into one ASPX page, you can safely avoid out-of-band requests. Microsoft ATLAS does this, and the programming model does not change dramatically.

However, the moment you want to promote reusability, you start using ASCX (user controls) inside the ASPX and later, for even further reusability, custom controls. In that case, the out-of-band request is the better option (and perhaps the only practical way to implement AJAX requests): it is simply too expensive to reconstruct the whole page (and re-instantiate all the user controls/custom controls on it) just to serve a single AJAX request.
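On the client, an out-of-band call is just a plain XMLHttpRequest pointed at a dedicated handler URL rather than at the hosting page. A minimal sketch (hypothetical names; the handler URL is illustrative, and the ActiveX branch covers IE6-era browsers):

```javascript
// Out-of-band AJAX call: the request goes straight to a dedicated
// handler, so no ViewState or page life-cycle overhead travels with it.
function createXhr() {
  // IE6 needs the ActiveX object; later browsers expose XMLHttpRequest.
  return window.XMLHttpRequest
    ? new window.XMLHttpRequest()
    : new ActiveXObject('Microsoft.XMLHTTP');
}

function callHandler(url, payload, onDone) {
  var xhr = createXhr();
  xhr.open('POST', url, true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      onDone(xhr.responseText);
    }
  };
  xhr.send(payload);
}
```

The callback receives only the raw response text; interpreting it and updating the right user control is entirely the caller's responsibility, which is exactly the separation described above.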

Thursday, February 23, 2006

Visual Studio 2005 Licensing

This afternoon, we went to the Microsoft office to have a discussion with Rashish Pandey, Product Marketing Manager (Developer Tools) for Microsoft Singapore. The agenda of the meeting was to discuss the various options for buying Visual Studio 2005 as part of our plan to migrate to it.

Noted below is what I extracted from the discussion, plus a few hours of surfing the Microsoft site for further clarification. Bear in mind that these notes are my personal opinion and should not be taken as-is. If you are evaluating VS.NET 2005 licensing as well, I suggest starting from the Microsoft site and then seeking more explanation from a Microsoft representative.


To begin with, the Visual Studio 2005 licensing scheme is on a per-user/developer basis, not a per-installation basis. Simply speaking, if there are 20 developers, then we need 20 licenses to adequately cover all usage, regardless of how many instances of Visual Studio are installed.

VS.NET 2005 comes in 4 editions: Express, Standard, Professional, and Team System. Visit Visual Studio 2005 Product Feature Comparisons for a more comprehensive comparison of these editions. In my opinion, the Express and Standard editions are more suitable for hobbyists or individual developers working at home than for enterprise use. Both editions come bundled with SQL Server Express Edition, so we know how Microsoft positions them. Surprisingly, both the Express and Standard editions support SQL Reporting Services, so theoretically they can be used to create and publish reports to SQL Reporting Services.

From the Professional Edition and up, there is SQL Server 2005 integration and an XML/XSLT editor, two major features missing from the first two editions. Microsoft positions the Professional edition for individual developers, but I believe that in practice, due to the overwhelming features and high price tag of the Team System editions, most companies will stick to the Professional edition.

At the high end of the product line is Visual Studio Team System, which consists of 4 products offered in 5 different packages. The products are Architect, Developer, and Team Tester, each targeting a specific role in the software development lifecycle, plus Team Foundation Server to enable collaboration among those roles. You can buy the Team System Suite, which bundles the 3 role-based Team System editions together, one for each role. Team Foundation Server is sold separately and will be available in March 2006.

Users connecting to Team Foundation Server need a license known as a CAL (Client Access License). Every individual Visual Studio Team System edition comes with 1 CAL, which means the user automatically has a license to access Team Foundation Server. The Professional edition does not include a CAL, so you need to buy one if you want a developer using the Professional Edition to use Team Foundation Server.

Microsoft has a scheme called Software Assurance, which entitles you to a free upgrade to the next version of a product as long as you have a valid subscription for that product. For VS.NET 2005, the MSDN Subscription is a superset of Software Assurance. Besides free upgrades to future releases of Visual Studio, it also gives you phone-based support, newsgroup support, and a bundle of Microsoft operating systems, server products, betas, etc., licensed for development and testing only (Developer Edition). In my view, this is the most important benefit of the MSDN Subscription: developers can try various Microsoft products and run their applications in various environments without having to buy licenses.

The Team System edition with the MSDN Subscription bundle comes with a 5-user edition of Team Foundation Server called 'Workgroup Foundation Server'. This product is functionally equivalent to Team Foundation Server, but it is limited to 5 users. IMPORTANT NOTE: you cannot buy extra CALs to go beyond 5 users.

Finally, there is a downgrade licensing scheme available for VS.NET. It means you can buy VS.NET 2005 to license your VS.NET 2003 installations. It sounds uncommon, but it can be useful when you still have projects in VS.NET 2003, are not quite ready to jump to VS.NET 2005, and need more licenses to cover additional developers.