Posts Tagged ‘c#’

iKnode: What is in a name?

// November 18th, 2011 // Comments Off on iKnode: What is in a name? // .net, Algorithms, Apple, Architecture, asp.net, cloud computing, iknode

I was recently talking to a friend, and he asked me why I named our project iKnode. He said: “I know you are a fan of Steve Jobs but this is too much”. To be honest, it has nothing to do with Apple or Steve Jobs.

The story begins in 2003 when I was working on my master's thesis. I was developing a model to mine data from data warehouses and build a knowledge base using Frames and Protegé. I created an engine to do the transformation and mining, which I called IKnow at the time. It was a single engine that analyzed and extracted the data to create ontology classes.

After a couple of years I got interested in distributed systems, and I made the engine run in a distributed fashion and interact with other engines. They could learn to do things, teach others and perform tasks. I figured that code is knowledge. If I wanted the engines to be able to learn and talk to each other they would have to have a common language. I decided to use C# for that. Now I had multiple nodes running on different machines, and I decided to call each node an IKnode. The product as a whole was still called IKnow.

After talking to a friend back in 2006, he mentioned that the name IKnode sounded more interesting than IKnow. After some consideration I changed the name and the namespace of the code. I even bought the domain. In that same talk, it also came out that for IKnode to be useful you needed a whole team of nodes to perform tasks. I mentioned to him, jokingly, “There is no I in IKnode”. And then I thought: “But there is an I in IKnode”. That struck a chord. I thought: “the I is not important, so let’s make it a lower case i”. And that is how the name came to be iKnode. The only letter that remains upper case is the K, which stands for the whole purpose of the project: Knowledge.

It is interesting to remember how far iKnode has traveled, and how it has grown. I feel like a proud father right now. :D

Easy .Net Transaction Management with Transaction Scope

// April 3rd, 2009 // Comments Off on Easy .Net Transaction Management with Transaction Scope // .net, ado.net, software development, SQL Server

Transactions are a common technique to ensure the consistency of data in database applications, for example with SQL Server. The System.Transactions namespace in the .Net framework simplifies transaction management considerably.

This time we are going to talk about TransactionScope, which is part of the System.Transactions assembly. Using TransactionScope to manage transactions is fairly simple, yet powerful. Let’s look at a simple example using ADO.NET:

TransactionScope tranScope = new TransactionScope();

SqlConnection connection = new SqlConnection(connectionSettings);
SqlCommand cmd = new SqlCommand(query, connection);

connection.Open();
cmd.ExecuteNonQuery();
connection.Close();

tranScope.Complete();

In the example we first initialize the TransactionScope object, then initialize the connection (a very important point that we will come back to later), execute the command, and finally call the Complete method on the transaction scope object.

When the transaction scope object is initialized, the transaction is effectively created and all commands after that point are protected by the transaction. After the commands are executed, we decide to either commit the transaction or roll it back. For the transaction scope, the Complete method effectively commits the transaction to the database, while Dispose executes the rollback.

For some developers this is quite confusing, since most developers are used to:

// Handling the transaction explicitly.
SqlConnection connection = new SqlConnection(connectionSettings);
connection.Open();

SqlTransaction trans = connection.BeginTransaction();

SqlCommand cmd = new SqlCommand(query, connection);
cmd.Transaction = trans;

cmd.ExecuteNonQuery();

trans.Commit();
connection.Close();

Not only is the TransactionScope idiom different, but the Commit and Rollback method names differ as well. The names differ because 1) the TransactionScope object is not a transaction per se, it is more of a transaction handler, and 2) TransactionScope follows the Dispose pattern (just like SqlConnection). Following the Dispose pattern makes it easy to do this:

// Handling the transaction implicitly.
using(TransactionScope tranScope = new TransactionScope()) {

	SqlConnection connection = new SqlConnection(connectionSettings);
	SqlCommand cmd = new SqlCommand(query, connection);

	connection.Open();
	int id = (int) cmd.ExecuteScalar();
	connection.Close();

	// If successful, commit.
	if(id > 0) {
		tranScope.Complete();
	}
}

The example displayed above shows how to handle implicit transactions with the ‘using’ keyword. When the ‘using’ idiom is applied to an object that implements the Dispose pattern, the ‘Dispose’ method of that object is called when the using block ends.

What the transaction scope is doing is registering the transaction in the connection opened inside the transaction scope block, making it easy to handle transactions in .Net code.

In the example, if the transaction scope is not completed before the block ends, it is automatically rolled back, since the using idiom calls the Dispose method of the object. This makes the use of transactions extremely easy to read and maintain.

Now what happens if there is an exception inside of the TransactionScope block? The object’s Dispose method is still called, which automatically rolls back the transaction.
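This rollback-on-exception behavior can be observed without touching a database, since the ambient transaction exposes a TransactionCompleted event. Here is a minimal sketch; the RollbackDemo class and method names are mine, invented for illustration:

```csharp
using System;
using System.Transactions;

public static class RollbackDemo
{
    // Runs a scope that throws before Complete() is called and
    // captures the transaction's final status via the
    // TransactionCompleted event.
    public static TransactionStatus RunFailingScope()
    {
        TransactionStatus finalStatus = TransactionStatus.Active;
        try
        {
            using (TransactionScope tranScope = new TransactionScope())
            {
                Transaction.Current.TransactionCompleted +=
                    (sender, e) => finalStatus = e.Transaction.TransactionInformation.Status;

                // Simulated failure: the exception leaves the using block,
                // so Dispose runs without Complete() being called.
                throw new InvalidOperationException("simulated failure");
            }
        }
        catch (InvalidOperationException)
        {
            // By the time we get here, Dispose has rolled the transaction back.
        }
        return finalStatus;
    }
}
```

Under these assumptions the method should return TransactionStatus.Aborted, confirming that an escaping exception alone is enough to roll the work back.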

Now it is very important, while using TransactionScope, that the connection is opened inside of the transaction scope block, otherwise the transaction won’t be registered in the connection.

SqlConnection connection = new SqlConnection(connectionSettings);

// This will not work. The transaction will not be registered in the connection,
// and it can deadlock your application.
connection.Open();

// Handling the transaction implicitly.
using(TransactionScope tranScope = new TransactionScope()) {

	SqlCommand cmd = new SqlCommand(query, connection);

	int id = (int) cmd.ExecuteScalar();

	// If successful, commit.
	if(id > 0) {
		tranScope.Complete();
	}
}

connection.Close();

The above example will not work. The transaction is registered in the connection only when the connection is opened inside the TransactionScope block; once registered, any command for that connection will take part in the transaction until the connection is closed.

Be very careful to open the connection inside the transaction scope block, otherwise the transaction won’t register itself in the connection, and your data won’t be protected by the transaction. In some cases, a command executed inside the transaction scope block on a connection that was opened before the block will result in a deadlock. The code above serves as an example of this case.

More information:
MSDN: TransactionScope Class.

MSDN: Implementing an Implicit Transaction Using Transaction Scope.

Unit Testing Data-Driven WCF Services

// March 27th, 2009 // Comments Off on Unit Testing Data-Driven WCF Services // .net, design patterns, services, Unit Tests, wcf

Any data-driven module, class or method needs to be tested. Anything that is data-driven (which covers almost everything) needs to be verified against its specs to ensure correctness. In this post we will talk about creating tests for Data-Driven WCF Services. For simplicity and ease of use, we are going to use the Enterprise Library Data Access Application Block to talk to the database.

Before we get deeper into the topic at hand, let’s talk about the unit test environment that I use and recommend for testing data-driven applications.

Testing Context
The basic objective of a unit test is to verify the correctness of an application. Testing every unit in a layer ensures the correctness of the behavior of those units, especially when they are used by other units in the application. This way all units can be reused with confidence.

Now, in order to maintain confidence in the unit being tested, we need to ensure that the tests are reliable and that they provide confidence in the correctness of the unit under test. A test that sometimes passes and sometimes fails is called an Erratic Test, and in order to maintain confidence we need to avoid such behavior.

Unit tests, as explained before, focus on testing a single unit in a system, unlike integration tests. It is therefore a best practice to have one database instance per developer or tester. Having two developers run the test fixtures at the same time on the same database creates erratic test behavior, which is not acceptable.

Data-Driven Tests
The pattern we are going to follow in this post is a simple form of the Data-Driven Test Pattern, which is described as:

“We store all the information needed for each test in a data file and write an interpreter that reads the file and executes the tests”.

This pattern is mostly used to avoid code duplication, especially when several test cases differ only in their data conditions. This way one test can be executed with several test scripts, instead of writing one test for every data condition.

We’ll use a simple form of the pattern, with SQL scripts instead of XML data files. I find it easier to just create a SQL script, execute it and continue with the testing.

Each test must have a context setup. Some approaches let one test method set up the data context for the others; for example, one test method tests the insertion of data, while the next tests the retrieval of the inserted data. I don’t recommend this setup, as it leads to Erratic Tests. Unit tests must be independent of each other, which means that every test method must set up its own environment and clean it up, restoring the initial state.

Unit Testing WCF Services
Unit Testing WCF Services is extremely easy. There are two ways to do this:

1) Include the WCF Library as a reference.

2) Load up the WCF Service host and call the service from the test fixture.

Since we are mostly doing unit testing and not integration testing, loading the WCF Library as a reference works fine. For integration purposes I would write one test fixture for each endpoint in the configuration, but that is outside the scope of this post.
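To illustrate option 1, here is a minimal, hypothetical stand-in: a class with the same Insert/GetIdByName shape as the service, constructed directly in test code with no ServiceHost involved. The InMemoryProjectService name and its dictionary backing are mine; the real service below is backed by stored procedures:

```csharp
using System.Collections.Generic;

// Hypothetical in-memory stand-in with the same shape as the real,
// database-backed service. A unit test can construct it directly
// because the service library is referenced like any other assembly.
public class InMemoryProjectService
{
    private readonly Dictionary<string, int> projects =
        new Dictionary<string, int>();

    public bool Insert(int id, string name, string description)
    {
        if (projects.ContainsKey(name))
        {
            return false; // duplicate name, mimic a failed insert
        }

        projects[name] = id;
        return true;
    }

    public int GetIdByName(string name)
    {
        int id;
        return projects.TryGetValue(name, out id) ? id : -1;
    }
}
```

A test fixture would simply do `var service = new InMemoryProjectService();` in its setup and call the methods directly, which is exactly the wiring we use with the real service class.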

Test Code

So we have two service methods: 1) ProjectService.Insert and 2) ProjectService.GetIdByName. This is the interface of such methods:

namespace WCFTesting
{
    using System.ServiceModel;

    /// <summary>
    /// Project Service Contract.
    /// </summary>
    [ServiceContract]
    public interface IProjectService
    {
        /// <summary>
        /// Inserts a new Project.
        /// </summary>
        /// <param name="id">Project Identifier.</param>
        /// <param name="name">Project Name</param>
        /// <param name="description">Project Description</param>
        /// <returns>True if successful, false otherwise.</returns>
        [OperationContract]
        bool Insert(int id, string name, string description);

        /// <summary>
        /// Returns the id of the project that goes by the selected name.
        /// </summary>
        /// <param name="name">Project Name</param>
        /// <returns><c>int</c></returns>
        [OperationContract]
        int GetIdByName(string name);
    }
}

The service method implementation is shown below:

/// <summary>
/// Inserts a new Project in database.
/// </summary>		
/// <param name="id">Project Identifier.</param>
/// <param name="name">Project Name</param>
/// <param name="description">Project Description</param>
/// <returns>True if successful, false otherwise.</returns>
public bool Insert(int id, string name, string description)
{
	Database db = DatabaseFactory.CreateDatabase();

	DbCommand command = db.GetStoredProcCommand("[dbo].[InsertProject]");
	db.AddInParameter(command, "id", DbType.Int32, id);
	db.AddInParameter(command, "name", DbType.String, name);
	db.AddInParameter(command, "description", DbType.String, description);
	db.AddOutParameter(command, "success", DbType.Boolean, 1);

	db.ExecuteNonQuery(command);

	bool success = (bool) db.GetParameterValue(command, "success");

	return success;
}
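For reference, here is a hypothetical sketch of what the [dbo].[InsertProject] stored procedure could look like, matching the parameters the command sets up above. The table name, column names and duplicate-id guard are my assumptions; the actual procedure ships in the Database folder of the attached solution:

```sql
-- Hypothetical sketch of [dbo].[InsertProject]; the real procedure
-- is in the Database folder of the attached solution.
CREATE PROCEDURE [dbo].[InsertProject]
    @id INT,
    @name VARCHAR(50),
    @description VARCHAR(255),
    @success BIT OUTPUT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM [dbo].[Project] WHERE [Id] = @id)
        SET @success = 0;
    ELSE
    BEGIN
        INSERT INTO [dbo].[Project] ([Id], [Name], [Description])
        VALUES (@id, @name, @description);
        SET @success = 1;
    END
END
```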

/// <summary>
/// Returns the id of the project that goes by the selected name.
/// </summary>
/// <param name="name">Project Name</param>
/// <returns><c>int</c></returns>
public int GetIdByName(string name)
{
	Database db = DatabaseFactory.CreateDatabase();

	DbCommand command = db.GetStoredProcCommand("[dbo].[GetProjectIdByProjectName]");
	db.AddInParameter(command, "Name", DbType.String, name);

	int id = -1;
	using(IDataReader reader = db.ExecuteReader(command)) {
		if(reader.Read()) {
			id = reader.GetInt32(0);
		}
	}

	return id;
}

Note: All code can be found in the attachment at the end of the post.

As you can see the methods are really simple. We are using a simple call to a stored procedure, getting the results and sending them back to the client.

Now let’s look at the actual unit test fixture:

/// <summary>
/// Test the Service method Insert
/// </summary>
[TestMethod]
public void Insert()
{
	using(TransactionScope tranScope = new TransactionScope()) {
		int id = 1;
		string name = "Project 1";
		string description = "Description of Project 1";
		Assert.IsTrue(Service.Insert(id, name, description));
		Assert.IsTrue(ProjectExists(name, description));

		id = 2;
		name = "Project 2";
		description = "Description of Project 2";
		Assert.IsTrue(Service.Insert(id, name, description));
		Assert.IsTrue(ProjectExists(name, description));
	}
}

/// <summary>
/// Test the Service method GetIdByName
/// </summary>
[TestMethod]
public void GetIdByName()
{
	using(TransactionScope tranScope = new TransactionScope()) {
		// Insert the Test Data.
		SqlHelper.ExecuteSqlScript(Db, @"......UnitTestsScriptsInsertTestProjects.sql");

		// Test Service Method.
		int id = -1;
		string name = "Kook";
		id = Service.GetIdByName(name);
		Assert.AreEqual<int>(1, id);

		name = "Samii";
		id = Service.GetIdByName(name);
		Assert.AreEqual<int>(2, id);

		name = "CPW";
		id = Service.GetIdByName(name);
		Assert.AreEqual<int>(3, id);
	}
}

First we have the Insert test method, which doesn’t require any existing data, so no script is executed before the service method. We do, however, wrap everything in a TransactionScope, which ensures that whatever the test does is rolled back once execution leaves the TransactionScope block. I will leave the discussion of Transactions for a later post.

Now, the second method is more interesting. The GetIdByName test method actually needs a setup. We could have created a dependency on the Insert method, so that the first test method prepares the data that the GetIdByName test requires. But we want to avoid Erratic Test behavior, so we keep them completely independent.

The GetIdByName test is also wrapped in a TransactionScope, so we can ensure that after the test is done the data is rolled back to its initial state.

The script (InsertTestProjects.sql) contains three sample projects, which the service method will retrieve so we can verify the returned data.
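For reference, a sketch of what InsertTestProjects.sql could contain, using the ids and names asserted in the GetIdByName test. The table and column names, and the description values, are my assumptions; the real script ships in the attached solution:

```sql
-- Hypothetical sketch of InsertTestProjects.sql: three sample projects
-- matching the ids and names the GetIdByName test asserts.
INSERT INTO [dbo].[Project] ([Id], [Name], [Description])
VALUES (1, 'Kook', 'First test project');

INSERT INTO [dbo].[Project] ([Id], [Name], [Description])
VALUES (2, 'Samii', 'Second test project');

INSERT INTO [dbo].[Project] ([Id], [Name], [Description])
VALUES (3, 'CPW', 'Third test project');
```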

As you can see, testing Data-Driven Services is very simple. If you pay close attention to the project you will notice that this approach doesn’t apply only to WCF Services; any .Net assembly can be tested this way.

Hope this helps you on your quest for simplicity and correctness!

Download Test Solution

Test Solution Requirements

  • Visual Studio 2008
  • Enterprise Library 4.x
  • .Net 3.5
  • SQL Server 2005

The Database folder contains the table and stored procedures needed by the code. Just modify ExecuteScripts.bat to set the database authentication info, and modify the App.config in the UnitTests project accordingly.

(Disclaimer: This project has been stripped down so that it is easy to understand. To fully follow the Data-Driven Test pattern, this second method should get the expected data from an Xml file, insert it into the database, and compare accordingly. In this sample we have the data hard coded for simplicity; building an interpreter is out of the scope of this post.)

* References:
– Meszaros, Gerard. “xUnit Test Patterns: Refactoring Test Code”
2007, Pearson Education.
Amazon

Is NStatic dead?

// October 16th, 2008 // 2 Comments » // .net, Tools

It has been almost three years since the initial announcement of NStatic by Wesner Moise and still no release, not even a beta. I wonder if the project is still alive or just in the backlog. Wesner has not commented at all about an NStatic release on his blog, nor does he write there as regularly as he used to.

This is very sad, given the excitement I felt when I learned about NStatic; it lasted a whole year as I followed how the tool works. That excitement has now gone out the door. NStatic is vaporware to me, since I haven’t really seen anything. I understand that software takes a long time to build, especially something this complex, but after almost three years we have seen nothing.

In my personal opinion, talking about the tool so much is just going to have a negative effect, especially if it is going to be a commercial tool.

Well, this is just my opinion; I feel disappointed because I was expecting it very eagerly. Now I don’t really expect it at all. There are other tools on the market that fill the gap right now. They might not have the features that NStatic claims to have, but they exist right now.

In my case, I have already adopted a tool for static analysis, and personally, even if NStatic came out tomorrow, it would take a long time for me to decide on adopting it. I don’t think the features it has over the one we are using are enough for us to change our process and re-adapt. NStatic would have to be extremely innovative in its features and extremely correct in its findings for me to consider it a cost-effective solution for my projects.

I have to say this rant is not against Wesner at all, or the NStatic project. The truth is I have never met him. But I feel very disappointed, because too much time has passed and I don’t think I will see the product that Wesner proposes for a long time.

Emacs Sharing!!

// August 26th, 2008 // 4 Comments » // .net, emacs

I have decided to share my .emacs and C# snippets (yasnippets) with the community. I was inspired by Dino Chiesa to post my .emacs. I have been expanding this emacs profile since 2001 and I think it has gotten to a point where it is clean and useful. There are still some things I need to remove: since I just began creating snippets for the yasnippet script, I have to remove my custom (sucky) code generation.

I also semi-ported Tomas Restrepo’s molokai color scheme for vim (which he ported from textmate).

Here is my .emacs.

Here is my csharp-snippets.

Here are the color schemes.

Here is a compiled version of all my elisp including the snippets.

Enjoy!