Saturday, March 27, 2010

DataWings.IO

    According to the unit test purists, your code should never take a dependency on the System.IO namespace directly. That way you can stub out the entire file system in your unit tests. Yes, I see the point, and I even agree, but unfortunately I haven't actually followed through and done this in my own code. Pure laziness.

    But now I'm working on a new feature in DataWings (no details yet, I'm planning a "Steve Jobs presents the iPad"-like big splash at some time in the future) where one of the main features has to do with manipulating stuff in the file system. So, DataWings now includes a very small assembly called DataWings.IO, which at the moment contains just one single static class wrapping the functionality in the System.IO namespace. The entire class definition is listed at the bottom of this post.

    I'm going 100% YAGNI here, and currently only the methods of System.IO that my exciting new feature needs have been exposed. These are:

  • File.Exists()
  • File.ReadAllText()
  • File.Delete()
  • File.WriteAllText()
  • File.ReadAllLines()
  • Path.Combine()
  • Directory.GetDirectories()
  • Directory.GetCurrentDirectory()

    An interesting note: I've implemented everything as extension methods so that you can say things like: string contents = @"c:\myfile.txt".ReadAllText(). I know that some of you cool cats frown at the notion of extension methods, but I generally think it's a thing of grace and beauty. Probably my Smalltalk background showing through.

    Another interesting thing: I've employed the power of Func<> and Action<> in the code, and IMHO the result is extremely pleasing to the eye: very clean, very easy to understand, very easy to stub the behavior in tests.

    So, as mentioned, the whole point is to make the code testable, and here's a sample of the kind of tests you can write if your file-system-accessing code uses the DataWings.IO functionality instead of System.IO directly.

    Imagine that I've written a class MyCustomBehavior with a method DoSomeStuff(), and an expected side effect of this method is that a file with a known name and known contents is written to the file system. This code takes a dependency on DataWings.IO and not on System.IO. Here's a unit test that checks whether this file actually is written:

[Test]
public void DoSomeStuff_FileAndContentsWrittenAsExpected()
{
    // Set up
    string expectedContents = "Expected contents.";
    string expectedFilename = @"c:\contents.txt";
    string writtenContents = null;
    string writtenFilename = null;
    IoExtensions.FunctionGetCurrentDirectory = () => @"c:\";
    IoExtensions.ActionWriteAllText = (path, contents) =>
    {
        writtenFilename = path;
        writtenContents = contents;
    };

    // Test
    MyCustomBehavior.DoSomeStuff();

    // Assert
    Assert.AreEqual(expectedFilename, writtenFilename);
    Assert.AreEqual(expectedContents, writtenContents);
}



And here’s the source:






using System;
using System.IO;

namespace DataWings.IO
{
    /// <summary>
    /// Static class that wraps the functionality found in the System.IO
    /// namespace, primarily the static classes File, Directory and Path. All
    /// operations against the file system are handled by action/function invocations,
    /// and it is possible to set your own actions and functions replacing the default
    /// ones, giving you the ability to stub out the file system.
    /// </summary>
    public static class IoExtensions
    {
        #region Declarations and Static constructor

        private static Func<string, bool> _funcFileExists;
        private static Func<string, string> _funcReadAllText;
        private static Action<string> _actionDeleteFile;
        private static Action<string, string> _actionWriteAllText;
        private static Func<string, string[]> _funcReadAllLines;
        private static Func<string, string, string> _funcPathCombineWith;
        private static Func<string, string[]> _funcGetDirectories;
        private static Func<string> _funcGetCurrentDirectory;

        static IoExtensions()
        {
            Reset();
        }

        /// <summary>
        /// Resets this static class by setting all actions and functions back to
        /// their original values where they access the functionality in the
        /// System.IO namespace
        /// </summary>
        public static void Reset()
        {
            _funcFileExists = path => File.Exists(path);
            _funcReadAllText = path => File.ReadAllText(path);
            _actionDeleteFile = path => File.Delete(path);
            _actionWriteAllText = (path, contents) => File.WriteAllText(path, contents);
            _funcReadAllLines = path => File.ReadAllLines(path);
            _funcPathCombineWith = (path1, path2) => Path.Combine(path1, path2);
            _funcGetDirectories = directory => Directory.GetDirectories(directory);
            _funcGetCurrentDirectory = () => Directory.GetCurrentDirectory();
        }

        #endregion

        #region IO Emulation

        public static bool FileExists(this string path)
        {
            return _funcFileExists.Invoke(path);
        }

        public static string ReadAllText(this string path)
        {
            return _funcReadAllText.Invoke(path);
        }

        public static void DeleteFile(this string path)
        {
            _actionDeleteFile.Invoke(path);
        }

        public static void WriteAllText(this string path, string contents)
        {
            _actionWriteAllText.Invoke(path, contents);
        }

        public static string[] ReadAllLines(this string path)
        {
            return _funcReadAllLines.Invoke(path);
        }

        public static string PathCombineWith(this string path1, string path2)
        {
            return _funcPathCombineWith.Invoke(path1, path2);
        }

        public static string[] GetDirectories(this string directory)
        {
            return _funcGetDirectories.Invoke(directory);
        }

        public static string GetCurrentDirectory()
        {
            return _funcGetCurrentDirectory.Invoke();
        }

        #endregion

        #region Setting functions and actions

        public static Func<string, bool> FunctionFileExists
        {
            set { _funcFileExists = value; }
        }

        public static Func<string, string> FunctionReadAllText
        {
            set { _funcReadAllText = value; }
        }

        public static Action<string> ActionDeleteFile
        {
            set { _actionDeleteFile = value; }
        }

        public static Action<string, string> ActionWriteAllText
        {
            set { _actionWriteAllText = value; }
        }

        public static Func<string, string[]> FunctionReadAllLines
        {
            set { _funcReadAllLines = value; }
        }

        public static Func<string, string, string> FunctionPathCombineWith
        {
            set { _funcPathCombineWith = value; }
        }

        public static Func<string, string[]> FunctionGetDirectories
        {
            set { _funcGetDirectories = value; }
        }

        public static Func<string> FunctionGetCurrentDirectory
        {
            set { _funcGetCurrentDirectory = value; }
        }

        #endregion
    }
}

Monday, March 8, 2010

DataWings and SQL Server Identity Columns

SQL Server has the practical concept of identity – you specify a column in your table as being the identity column, and this column automatically gets populated with unique values. This mechanism is most commonly used for the primary key column in the table.

When you use such identity columns you are not allowed to provide a value yourself (at least not by default). This has been a major problem when using the DataBoy functionality in everybody's favorite tool for data driven testing, DataWings: oftentimes you need to set up known data consisting of several rows in different tables where the data is connected by primary key/foreign key associations. This simply hasn't been supported in DataWings.

Until now, that is: enter DataWings v 1.1. This version contains a few bug fixes, but apart from that, version 1.1 introduces return value functionality. This is functionality for getting data back from the database as you insert it, and using this data in subsequent inserts or updates.

This new functionality hinges on the two new commands ReturnValue and BindColumn. I think a couple of examples should make clear what this is about. In these samples we assume the existence of two tables: Person with primary key column IdPerson, and Address with a foreign key column also named IdPerson which refers to the primary key of Person.

So, how do you insert a person row and an address row connected to the person row? Like this:

Guid personId = Guid.NewGuid();
Guid addressId = Guid.NewGuid();
DataBoy
    .ForTable("Person")
        .Row("Id", personId)
        .ReturnValue("IdPerson").ForImmediateUse()
    .ForTable("Address")
        .Row("Id", addressId)
        .BindColumn("IdPerson").ToLast()
    .Commit();

The ForImmediateUse() and ToLast() methods are useful for quick usage where the associated columns are inserted directly after one another. The functionality also lets you name return values for more complex usage scenarios:

Guid pId1 = Guid.NewGuid();
Guid pId2 = Guid.NewGuid();
Guid aId1 = Guid.NewGuid();
Guid aId2 = Guid.NewGuid();
DataBoy
    .ForTable("Person")
        .Row("Id", pId1).ReturnValue("IdPerson").AtKey("Person1")
        .Row("Id", pId2).ReturnValue("IdPerson").AtKey("Person2")
    .ForTable("Address")
        .Row("Id", aId1).BindColumn("IdPerson").To("Person1")
        .Row("Id", aId2).BindColumn("IdPerson").To("Person2")
    .Commit();

Get the bits here.

Tuesday, October 20, 2009

DataWings: Convention over Configuration

I've just released some new functionality for the increasingly popular (?) framework for data driven integration testing: DataWings. This new functionality aims at letting you register conventions as to how your domain entities map to the database tables, and in this way you can declare your assertions in a much more concise and elegant manner. The purpose is to reduce the amount of ceremony needed to execute the assertions.

Get the bits here.

An example:

The Old Way:

   1:  [Test]
   2:  public void CreatePersonInTransaction_ScopeCompleted_PersonExistsInDatabase()
   3:  {
   4:      Person person;
   5:      using (var scope = new TransactionScope())
   6:      {
   7:          person = new Person{ Id = Guid.NewGuid() };
   8:          IoC.GetInstance<IDomainObjectProvider>().Save(person);
   9:          scope.Complete();
  10:      }
  11:      // Assertion using DataWings
  12:      // The old fashioned way
  13:      DbAssert.ForTable("PERSON")
  14:          .WithColumnValuePair("ID", person.Id)
  15:          .Exists();
  16:  }






The New Way (using conventions):





   1:  [Test]
   2:  [DbTableNameConvention(DbTableNameConventionType.ClassNameEqualsTableName)]
   3:  [DbIdConvention("ID")]
   4:  public void CreatePersonInTransaction_ScopeCompleted_PersonExistsInDatabase()
   5:  {
   6:      Person person;
   7:      using (var scope = new TransactionScope())
   8:      {
   9:          person = new Person{ Id = Guid.NewGuid() };
  10:          IoC.GetInstance<IDomainObjectProvider>().Save(person);
  11:          scope.Complete();
  12:      }
  13:      // Assertion using DataWings
  14:      // The new way with conventions
  15:      DbAssert.Exists(person);
  16:  }









All conventional assertions (such as the one on line 15) are available as extension methods, so that the code on line 15 can be replaced with this code:



person.AssertExistsInDatabase();


Attribute based configuration



Notice how the assertion on lines 13 to 15 in the “old way” is replaced by just a single line (number 15) in the sample using conventions. In order to use this concise notation, DataWings will have to know how to map the object to its corresponding table, i.e. what the conventions are. These conventions are specified with the help of attributes, and in the sample above you can see how the two attributes DbTableNameConvention and DbIdConvention set up such conventions.



Such attributes may be put on methods or classes; the functionality walks the stack looking for an attribute to use. This is done by examining the method and class of each stack frame until a suitable attribute decoration is located. If no suitable attribute can be found, the conventional conventions (see below) will be used.
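As a rough sketch of that stack-walking resolution (a hypothetical reconstruction, not the actual DataWings source; the attribute and resolver names here are simplified stand-ins):

```csharp
using System;
using System.Diagnostics;

// Simplified stand-in for the real DataWings convention attribute.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public class DbIdConventionAttribute : Attribute
{
    public string Pattern { get; private set; }
    public DbIdConventionAttribute(string pattern) { Pattern = pattern; }
}

public static class ConventionResolver
{
    // Walks the call stack; at each frame the method is examined first,
    // then its declaring class. The first decoration found wins.
    public static DbIdConventionAttribute Resolve()
    {
        var trace = new StackTrace();
        for (int i = 0; i < trace.FrameCount; i++)
        {
            var method = trace.GetFrame(i).GetMethod();
            var found = method.GetCustomAttributes(typeof(DbIdConventionAttribute), false);
            if (found.Length > 0) return (DbIdConventionAttribute)found[0];
            if (method.DeclaringType != null)
            {
                found = method.DeclaringType.GetCustomAttributes(typeof(DbIdConventionAttribute), false);
                if (found.Length > 0) return (DbIdConventionAttribute)found[0];
            }
        }
        return null; // caller falls back to the conventional conventions
    }
}
```

With this shape, a test method (or its fixture class) decorated with [DbIdConvention("ID")] anywhere up the call chain is picked up by Resolve() without any explicit registration.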



 



The conventional conventions



You can start using the conventions functionality immediately, if you accept the conventional conventions, that is. And these conventions are:




  • Table name equals name of class


  • Name of the unique key column in the table is of the format Id[ClassName]



Registering conventions for mapping class to table



The convention for how the class maps to a database table is specified through the usage of DbTableNameConvention. The following examples show how to use this attribute:



Class name and table name match exactly



[DbTableNameConvention(DbTableNameConventionType.ClassNameEqualsTableName)]


Class name as part of the table name



[DbTableNameConvention(DbTableNameConventionType.Custom, Convention = "TBL_{0}")]


Overriding for specific classes



The attribute also has a property EntityType, and this is used in cases where the conventions for a specific class do not match the conventions of the other entities that are in play in the test.



[DbTableNameConvention(DbTableNameConventionType.Custom, Convention = "TBL_{0}", EntityType = typeof(Address))]

[DbTableNameConvention(DbTableNameConventionType.ClassNameEqualsTableName, EntityType = typeof(Relation))]



Registering conventions for mapping primary key



The convention for how the primary key of the table is mapped to a property of the class is specified by using DbIdConvention. Some examples:



Name of primary key column is identical for all tables



[DbIdConvention("ROWID")]



Class name is part of primary key column name



[DbIdConvention("Id{0}")]

Thursday, May 28, 2009

DataWings – Data driven integration testing





Yes, I'm now officially an open source contributor, and the project's even got its logo, so you know it's gotta be good.



So what we have attempted to do is make a lightweight, easy to use, no-set-up tool for testing code that sits on top of a database. With this tool, DataWings, you can set up the database so that your tests are accessing known data, and assert that the database is in the expected state after the tests have executed. And all of this is done directly in the test code.



Get the bits here.



Here’s a more detailed description of what the tool does, and how it works:



First, a word of caution



DataWings is designed to be a tool to be used at design time and during testing. No attention has been paid to security issues, and we definitely do not recommend using this code in production.



Configuring the connection string



The first thing you need to do when using DataWings is to configure the connection string(s) to be used. The functionality for doing this is purposefully designed with two goals in mind: a) making it easy to set up the connection string in code in order to "get going" as fast as possible, and b) making it easy to maintain the connection string outside of code, thus helping to ensure that the test will remain operative in the future.



The connection string is set by decorating either the class or method with an attribute. There are several different kinds of attributes that can be used, but here we'll focus on ConnectionFromConfigFile. As the name implies, this attribute is used when the connection string is registered in the standard <connectionStrings> section of the configuration file. A typical usage of this attribute might look like this:



[ConnectionFromConfigFile(SqlVendor.Oracle, Key = "MyConnection", AlternativeConnection = "TheConnectionString")]


Here, the SQL vendor (input to the constructor) dictates which ADO.NET provider will be used behind the scenes, the Key property specifies the name of the connection string in the configuration file, and the AlternativeConnection property is used if the specified connection string is not found in the configuration file or if any other problem is detected while trying to look up this string.



How the attribute is resolved



As mentioned, the connection string attribute can be used to decorate both methods and classes. The process of resolving which attribute to be used is carried out by walking the stack looking for an appropriate attribute. The algorithm first looks at the executing method of the stack frame, and if this method does not have an appropriate decoration, the class of the executing method is examined. This process continues for each stack frame until a decoration is found. If no such decoration can be located, an exception is raised.



Named connection string



The ConnectionFromConfigFile attribute also has a property called Name. This property is useful in situations where the tests touch more than one database. All the static gateway classes into the DataWings functionality (DataBoy, DbAssert and Adversary - see below) have a ForConnection() method, through which the named connection attribute to be used can be specified.



Here's a sample of such a named decoration:



[ConnectionFromConfigFile(SqlVendor.Oracle, Name="Default", Key = "MyConnection", AlternativeConnection = "TheConnectionString")]


DataBoy



Standard usage



DataBoy provides functionality for keeping the data in the database in a consistent state so that the tests are running against known data. This sample shows the standard usage of DataBoy:



DataBoy
    .ForTable("Person")
        .Row("IdPerson", 1).Data("Surname", "Obama").DeleteFirst()
        .Row("IdPerson", 2).Data("Surname", "Bush").DeleteFirst()
    .ForTable("Address")
        .Row("IdAddress", 100).Data("Street", "Main street").DeleteFirst()
    .Commit();


Internally DataBoy keeps track of changes in a session and this session resides only in memory until the Commit() method is invoked. The table to insert data into is specified with the ForTable() method, and this table will be the one that is used for all subsequent row specifications until another call to ForTable() is encountered.



Each row to be inserted is marked by the Row() method, which takes as input the column name and the value that (presumably) uniquely identifies the row. The row can receive the DeleteFirst() method, and if this method has been invoked, DataBoy will delete the row from the database before it is inserted.



Order of execution



The rows of the session are traversed twice, first from back to front and then from front to back. In the first pass (backwards), any deletions are performed (i.e. rows marked with DeleteFirst()), while the inserts are performed in the second pass. In this way it should largely be possible to order the statements so that any errors due to foreign key constraint violations are avoided.
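The two-pass ordering can be sketched like this (hypothetical code, not DataBoy's actual implementation; RowSpec and the Console output are stand-ins for the real session entries and SQL execution):

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a row registered in a DataBoy-style session.
class RowSpec
{
    public string Table;
    public bool DeleteFirst;
    public RowSpec(string table, bool deleteFirst) { Table = table; DeleteFirst = deleteFirst; }
}

static class SessionCommit
{
    public static void Commit(IList<RowSpec> session)
    {
        // Pass 1, back to front: deletes. Child rows registered after
        // their parents are deleted before the parents.
        for (int i = session.Count - 1; i >= 0; i--)
            if (session[i].DeleteFirst)
                Console.WriteLine("DELETE " + session[i].Table);

        // Pass 2, front to back: inserts. Parents registered before
        // their children are inserted first.
        foreach (var row in session)
            Console.WriteLine("INSERT " + row.Table);
    }
}
```

So for a session registered as Person then Address (with a foreign key from Address to Person), the deletes run as Address, Person and the inserts as Person, Address, which is exactly the order that avoids constraint violations.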



Updating instead of deleting



By invoking the ForUpdate() method, an update statement (instead of an insert) will be generated and invoked for the row in question. Example:



DataBoy.ForTable("Person").Row("IdPerson", 1).Data("Surname", "Obama").ForUpdate().Commit();




Just deleting





Rows can be deleted through the usage of the ForDelete() method. Example:



DataBoy.ForTable("Person").Row("IdPerson", 1).ForDelete().Commit();


Executing custom queries





The ExecuteNonQuery() method supplies a way to invoke custom queries directly against the database:



string sqlQuery = "INSERT INTO Address (IdAddress, Street) VALUES (99, 'Some Street')";
DataBoy.ExecuteNonQuery(sqlQuery).Commit();


 


DbAssert



The static class DbAssert is for asserting that the data in the database is in the expected state. This class is quite similar to the familiar Assert class of many unit test frameworks.



In order to set up an assertion you first need to specify which table you are testing against. As with DataBoy, this is done through the ForTable() method. When the table has been specified, we need to tell the framework which row we are interested in, and this is accomplished with the WithColumnValuePair() method. This method takes a column name and corresponding value as input; generally this will be the name of the primary key column and the primary key value for the row of interest. If more than one row exists for this column/value pair, the first row encountered (randomly) will be used.



When the assertion has been set up, the actual assertion can be specified:



Exists(), NotExists()





Determines whether the row in question exists at all. Example:





string columnName = "Surname";
string columnValue = "Obama";
DbAssert
    .ForTable("Person")
    .WithColumnValuePair(columnName, columnValue)
    .Exists();




AreEqual()





Determines whether the value in the specified row equals the specified value. Example:



DbAssert.ForTable("Person")
    .WithColumnValuePair("IdPerson", 1)
    .AreEqual("FirstName", "Barack");


Evaluate()



This method returns the entire row, and you can perform arbitrarily complex tests on the values of this row by using a lambda expression. Example:





DbAssert.ForTable("Person")
    .WithColumnValuePair("IdPerson", 1)
    .Evaluate(row =>
        row.GetResult("FirstName") == "Barack" &&
        row.GetResult("IsPresident") == true);




Adversary



Adversary is a static gateway class providing functionality for provoking conflicts in optimistic concurrency scenarios. This code is still in a very early phase, and hopefully it will mature in the future.



Sql provider provisioning and built in providers



DataWings natively supports the SQL Server and Oracle database engines by using the System.Data.SqlClient and System.Data.OracleClient providers of the .NET Framework. Additionally, DataWings supports SQLite through the separate assembly DataWings.SQLite. This support for SQLite has been realized by use of the built-in provider provisioning infrastructure: a model where you can add support for your favorite database engine by implementing two simple interfaces. Hopefully I'll be able to get into more detail about this at a later stage.

Wednesday, May 13, 2009

Fluent Castle Windsor and Configured Parameters

Castle Windsor version 2.0 has just been released (despite the fact that version 1.0 never existed). The biggest new feature in this release is the fluent configuration interface, which lets you set up your components in code in an elegant way (as opposed to configuring them in XML).

A component which in xml is set up like this:

<component
    id="service"
    lifestyle="transient"
    service="Some.Namespace.IService, MyAssembly"
    type="Some.Namespace.Service, MyAssembly">
  <parameters>
    <lang>Norwegian</lang>
  </parameters>
</component>


can now be configured fluently like this:



var container = new WindsorContainer();
container.Register(
    Component
        .For<IService>()
        .ImplementedBy<Service>()
        .LifeStyle.Transient
        .Parameters(Parameter.ForKey("lang").Eq("Norwegian")));


For more examples of how the fluent interface works, read this.



I am really enjoying the experience of using this new fluent interface; it is much easier to configure a component the first time, and you get full support from your compiler and from ReSharper. At last I am able to rename classes (through the refactoring functionality in ReSharper) without having to hunt down and fix the configuration for the component in the XML file.



Configuring parameters



Notice the parameter lang in the example above; the value Norwegian is hardwired in as a parameter (it is presumably injected into the component's constructor). In the real world you would probably want to keep track of all such properties separately in the properties node, thus promoting reuse and easing maintenance. Your XML configuration might look like this:



<properties>
  <language>Norwegian</language>
</properties>
<components>
  <component
      id="service"
      lifestyle="transient"
      service="Some.Namespace.IService, MyAssembly"
      type="Some.Namespace.Service, MyAssembly">
    <parameters>
      <lang>#{language}</lang>
    </parameters>
  </component>
</components>


There is a tension between the benefits and drawbacks of having the configuration in code as opposed to in separate xml files: in code it is easier to manage the configuration while under development, while xml-based configuration supports easy changes to a system that has already been deployed. I feel that the smartest path would be to configure the components in code, but keep the properties defined in xml.



I am currently retrofitting fluent castle configuration on a relatively large application that is totally castle.windsor based. My initial gut feeling was that this new version of castle as a matter of course supported this "smartest path". Unfortunately, it doesn't, and so I was left to my own devices: enter ConfiguredParameter.



ConfiguredParameter



With the ConfiguredParameter functionality in place, I am able to configure my component as follows:



Properties in xml:



<castle>
  <properties>
    <language>Norwegian</language>
  </properties>
</castle>


Component in code:



var container = new WindsorContainer();
container.Register(
    Component
        .For<IService>()
        .ImplementedBy<Service>()
        .LifeStyle.Transient
        .Parameters(ConfiguredParameter.ForKey("lang").GetValue("language")));


To be clear, the line ConfiguredParameter.ForKey("lang").GetValue("language") will look up the value of the configured property language, and this value will be injected into the component at lang (which presumably is a parameter in the constructor of the type Service).



The functionality must be bootstrapped as your application starts (before your container is initialized), and this will typically be accomplished like this:



InitializeConfiguredParameters.Initialize();


Here the application configuration file will be parsed, and any additional Castle configuration files (included through the use of Castle's <include> element) will be parsed as well. There is a single overload of the Initialize() method where you can explicitly indicate which config file (.config or .xml) to parse, but this is mostly useful in testing scenarios.



This functionality is rather simple, and the implementation consists of just two types in addition to a couple of parser classes responsible for the actual parsing of the configuration files. These two types are ConfiguredParameter and InitializeConfiguredParameters. Below you will find the definition of the two types.



On the Road Map



The ability to use configured key value pairs from the appSettings element of the application configuration file might be nice, and I'll implement it whenever I need it.





The Code



using System;
using System.Collections.Generic;
using System.Configuration;
using Castle.MicroKernel.Registration;

/// <summary>
/// Used to access parameters that are configured within a standard
/// castle.windsor properties element
/// </summary>
public class ConfiguredParameter
{
    #region Static API

    private static readonly IDictionary<string, string> configuredParameters = new Dictionary<string, string>();
    private static readonly object syncLock = new object();

    /// <summary>
    /// Adds each parameter in the incoming dictionary to the internal
    /// cache of configured parameters
    /// </summary>
    /// <param name="parameters">The parameters.</param>
    internal static void AddParameters(IDictionary<string, string> parameters)
    {
        // Thread safe!
        lock (syncLock)
        {
            foreach (var pair in parameters)
            {
                // Skip if already contained, assuming that
                // it's some kind of race condition. So, if
                // the configuration contains two or more
                // identical keys, one of them will "win"
                // unpredictably
                if (!configuredParameters.ContainsKey(pair.Key))
                {
                    configuredParameters.Add(pair);
                }
            }
        }
    }

    /// <summary>
    /// Resets the ConfiguredParameter infrastructure by clearing all loaded
    /// configured parameters. NB! This method should normally not be invoked,
    /// and it is defined mostly for testing purposes.
    /// </summary>
    public static void Reset()
    {
        configuredParameters.Clear();
    }

    /// <summary>
    /// Sets the name of the parameter on a new instance of
    /// ConfiguredParameter and returns it
    /// </summary>
    /// <param name="parameterKey">The key.</param>
    /// <returns></returns>
    public static ConfiguredParameter ForKey(string parameterKey)
    {
        if (configuredParameters.Count == 0)
            throw new InvalidOperationException("ConfiguredParameter infrastructure not initialized.");
        return new ConfiguredParameter(parameterKey);
    }

    private static string GetVal(string key)
    {
        try
        {
            return configuredParameters[key];
        }
        catch (KeyNotFoundException e)
        {
            string message = String.Format("No configured parameter named {0} can be found", key);
            throw new ConfigurationErrorsException(message, e);
        }
    }

    #endregion

    private readonly string parameterKey;

    /// <summary>
    /// Initializes a new instance of the <see cref="ConfiguredParameter"/> class.
    /// </summary>
    /// <param name="parameterKey">The parameter key.</param>
    private ConfiguredParameter(string parameterKey)
    {
        this.parameterKey = parameterKey;
    }

    /// <summary>
    /// Returns a Parameter with the value at the propertyKey in the
    /// castle configuration
    /// </summary>
    /// <param name="propertyKey">The property key.</param>
    /// <returns></returns>
    public Parameter GetValue(string propertyKey)
    {
        return Parameter.ForKey(parameterKey).Eq(GetVal(propertyKey));
    }
}


using System.IO;
using System.Reflection;

/// <summary>
/// Responsible for initializing the ConfiguredParameter functionality
/// by getting hold of and parsing any relevant configuration files
/// containing castle.windsor parameters
/// </summary>
public static class InitializeConfiguredParameters
{
    /// <summary>
    /// Initializes this instance by getting hold of the application's
    /// configuration file (app.config or web.config) and parsing it
    /// looking for configured parameters. If the castle configuration
    /// of this file contains include elements, the castle files referenced
    /// in these elements are also parsed.
    /// </summary>
    public static void Initialize()
    {
        string configFile = Path.GetFileName(Assembly.GetEntryAssembly().Location) + ".config";
        if (File.Exists(configFile))
        {
            InitializeWithFile(configFile);
        }
    }

    /// <summary>
    /// Initializes with the given file. Valid file types are application files
    /// (app.config or web.config) as well as stand alone castle config
    /// files
    /// </summary>
    /// <param name="filename">The filename.</param>
    public static void InitializeWithFile(string filename)
    {
        ReaderBase reader;
        if (Path.GetExtension(filename).ToLower() == ".config")
        {
            reader = new ConfigFileReader(filename);
        }
        else
        {
            reader = new PropertiesReader(filename);
        }
        ConfiguredParameter.AddParameters(reader.GetConfiguredProperties());
    }
}

Tuesday, March 31, 2009

My First PowerShell Script

For a while now, I've been planning on getting my hands dirty with PowerShell. There are at least four features that make this a pretty compelling scripting environment:

  • The ability to write scripts combining regular scripting commands with the full power of the .NET base class library (also including Cmdlets that you implement yourself).
  • The ability to easily define functions as a part of the script.
  • The Cmdlet Verb-Noun naming convention, giving the scripting syntax a consistent and easy to discover feel that is completely missing in the jungle of cryptic abbreviated commands of the scripting environments of yore.
  • Everything is object based, so that when you, for instance, loop through all files in a directory by using the built in Get-ChildItem function, you are in fact accessing objects that represent the files and not just a textual path.

I hereby announce that I have completed my first PowerShell script (see below).

The good thing about being a latecomer is that you get to use the newest version, so I went directly for PowerShell 2.0 CTP3. A very nice thing about this version is that it comes complete with its own Integrated Scripting Environment, and this has been tremendously helpful in the process of understanding the basics and weeding out bugs.

So, what does my very first PowerShell script do? It recursively copies the contents of one directory to another. As input to this process it takes a list of directory matching filters and a similar list of file extensions to be used as a filter. All in all this serves the purpose of copying the entire contents of a directory while keeping away from certain paths and files, as dictated by the filters.

So by giving it "_ReSharper","\obj","\bin","\.svn" as the directory filter and ".user", ".suo", ".resharper" as the file extension filter, I get functionality for copying .NET source code directories without also copying all the crud that is lying around as a by-product of the VS build process, Subversion, ReSharper and so on.

I guess that everyone with some PowerShell experience will view this script as childishly amateurish, but at least it works.

You're welcome!


## Checks whether the directory path contains any
## of the filter strings. If so, returns $false,
## otherwise $true.
function Passes-Filter($dir, $filters) {
    foreach ($filter in $filters) {
        if ($dir.Contains($filter)) {
            return $false
        }
    }
    return $true
}

## Checks whether the extension of the file matches
## any of the filters. If so, returns $false,
## otherwise $true.
function Passes-FileFilter($file, $filters) {
    $ext = [System.IO.Path]::GetExtension($file).ToLower()
    foreach ($filter in $filters) {
        if ($filter.Equals($ext)) {
            return $false
        }
    }
    return $true
}

## Maps a path under the source directory to the
## corresponding path under the destination directory.
## Returns the empty string for the base directory itself.
function Get-DestinationPath($basedir, $candidate, $destdir) {
    $baseLength = [int]$basedir.Length
    $candidateLength = [int]$candidate.Length
    if ($candidateLength.Equals($baseLength)) {
        return ''
    }
    else {
        $rightSide = $candidate.Substring($baseLength, ($candidateLength - $baseLength))
        return $destdir + $rightSide
    }
}

function Copy-CodeFile($basedir, $candidate, $destdir) {
    $newFile = Get-DestinationPath $basedir $candidate.FullName $destdir
    copy $candidate.FullName $newFile
}

function Make-CodeDirectory($basedir, $candidate, $destdir) {
    $newDir = Get-DestinationPath $basedir $candidate $destdir
    if (-not [System.String]::IsNullOrEmpty($newDir)) {
        mkdir $newDir
    }
}

function Traverse-Directory($basedir, $destdir, $dir, $dirfilters, $filefilters) {
    foreach ($candidate in Get-ChildItem $dir) {
        if ([System.IO.File]::Exists($candidate.FullName)) {
            # It's a file
            if (Passes-FileFilter $candidate $filefilters) {
                Copy-CodeFile $basedir $candidate $destdir
            }
        }
        else {
            # It's a directory
            if (Passes-Filter $candidate.FullName $dirfilters) {
                Write-Host -ForegroundColor GREEN $candidate
                Make-CodeDirectory $basedir $candidate.FullName $destdir
                Traverse-Directory $basedir $destdir $candidate.FullName $dirfilters $filefilters
            }
            else {
                Write-Host -ForegroundColor RED "Stopped in dir filter: " $candidate.FullName
            }
        }
    }
}

## Script entry point
Clear-Host
$dirfilters = "_ReSharper", "\obj", "\bin", "\.svn"
$filefilters = ".user", ".suo", ".resharper"
$sourceDir = 'C:\depot\MyProject\trunk'
$destDir = 'C:\temp\CopyOfMyProject'
Traverse-Directory $sourceDir $destDir $sourceDir $dirfilters $filefilters

Monday, March 9, 2009

Smart Clients and System.Transactions Part 5 – Fixing the Timeout Problem

This is the fifth installment of an ongoing series about using System.Transactions as the foundation for client-side infrastructure for managing changes among loosely coupled components.

Previous posts: Introduction, Timeout, Enlistments, The transaction sink


Earlier I discussed the timeout problem, which was a serious setback to the plan of building a client-side change gathering infrastructure based on System.Transactions. How did we fix this problem? Well, we didn't. Instead we had to resort to cheating:

var scopeFactory = IoC.GetInstance<ITransactionScopeFactory>();
using (var scope = scopeFactory.Start())
{
    // Transactional code

    scope.Complete();
}

The above code shows how we now start client-side transactions. As you can see, we no longer start a System.Transactions.TransactionScope, but rather look up a factory class (ITransactionScopeFactory) through a service locator and ask this instance to start a transaction scope. This scope implements the interface ITransactionScope, which is defined as follows:

/// <summary>
/// A scope mimicking the API of System.Transactions.TransactionScope.
/// Defines a single method Complete() used for marking the scope
/// as "successful". This interface extends IDisposable, and when
/// Dispose() is invoked, this scope will instruct the ambient
/// transaction to commit or rollback depending on whether the
/// scope has been completed or not.
/// </summary>
public interface ITransactionScope : IDisposable
{
    /// <summary>
    /// Marks the scope as complete, resulting in this scope
    /// instructing the ambient transaction to commit when
    /// Dispose() is invoked later on. If Complete() is never
    /// invoked, the scope will force the ambient transaction
    /// to rollback upon Dispose().
    /// </summary>
    void Complete();
}

The responsibility of the factory is to determine the type of scope to generate, and this it does by querying the ambient transaction as to whether or not a transaction has already been started. If no such transaction exists, an instance of ClientTransactionScope is created; if a transaction already exists, an instance of NestedClientTransactionScope is created. The difference between these two classes lies mainly in their respective constructors and in the Dispose() method:
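The factory logic just described might look roughly like this. This is only a sketch: the ClientTransaction.Current.IsActive member is an assumption made up for illustration, not the actual API of the code base.

```
public class TransactionScopeFactory : ITransactionScopeFactory
{
    public ITransactionScope Start()
    {
        // If an ambient client transaction is already running, the new
        // scope must be a nested one; otherwise we start the outer scope.
        // (IsActive is a hypothetical member standing in for whatever
        // "is a transaction started?" query the real code performs.)
        if (ClientTransaction.Current.IsActive)
        {
            return new NestedClientTransactionScope();
        }
        return new ClientTransactionScope();
    }
}
```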

Constructor and Dispose() of ClientTransactionScope

/// <summary>
/// Initializes a new instance of the <see cref="ClientTransactionScope"/> class.
/// The ambient transaction is automatically started as this instance constructs.
/// </summary>
public ClientTransactionScope()
{
    GetClientTransaction().Begin();
}

/// <summary>
/// If Complete() has been invoked prior to this, the
/// ambient transaction will be instructed to commit
/// here, else the transaction will be rolled back
/// </summary>
public virtual void Dispose()
{
    if (Completed)
    {
        GetClientTransaction().Commit();
    }
    else
    {
        GetClientTransaction().Rollback();
    }
}

Constructor and Dispose() of NestedClientTransactionScope

/// <summary>
/// Initializes a new instance of the <see cref="NestedClientTransactionScope"/> class.
/// This constructor does nothing since an ambient transaction has already been
/// started when an instance of this class is created.
/// </summary>
public NestedClientTransactionScope()
{}

/// <summary>
/// If Complete() has been invoked prior to this, nothing happens
/// here. If Complete() has not been invoked, the ambient transaction
/// will be marked as "non-committable". This has no immediate
/// consequence, but the transaction is doomed and will be
/// rolled back when the outermost scope is disposed, regardless of
/// whether this scope attempts to roll back or commit the tx.
/// </summary>
public override void Dispose()
{
    if (!Completed)
    {
        GetClientTransaction().MarkInnerTransactionNotCompleted();
    }
}

The comments in the code explain the distinction between these two classes.

Commit and Rollback

The actual task of finishing the transaction, either by commit or rollback, is the responsibility of the ambient transaction. Throughout the lifetime of the scope, enlistments that have detected changes have enlisted with the ambient transaction. The exact details of how this enlistment procedure works are kept hidden from the enlistments, but what actually happens is that the ambient transaction maintains a dictionary to which the enlistments are added.

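The registration step just described could be sketched as follows. The key type and the method name Enlist() are assumptions for the sake of illustration, not the actual DataWings code; only the dictionary-of-enlistments idea comes from the text.

```
private readonly Dictionary<Guid, IEnlistmentNotification> enlistments =
    new Dictionary<Guid, IEnlistmentNotification>();

/// <summary>
/// Called by enlistments that have detected changes. The enlistment
/// is simply stored away; the real System.Transactions enlistment
/// does not happen until Commit() or Rollback() starts the scope.
/// </summary>
public void Enlist(Guid key, IEnlistmentNotification enlistment)
{
    if (!enlistments.ContainsKey(key))
    {
        enlistments.Add(key, enlistment);
    }
}
```

This is consistent with the EnlistAll() method below, which iterates enlistments.Values and enlists each one volatilely with the real transaction.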
When the time to commit or roll back finally arrives, a real System.Transactions.TransactionScope is started, the registered enlistments are enlisted with the transaction, and Complete() is invoked on the scope only if the transaction is meant to be committed:

/// <summary>
/// Instructs the transaction to begin the two-phase commit procedure.
/// This will be done unless a nested inner transaction scope has
/// instructed the transaction to roll back prior to this. In that
/// case the transaction is rolled back at this point in time and a
/// TransactionAbortedException is thrown.
/// </summary>
public void Commit()
{
    if (!canCommit)
    {
        Rollback();
        throw new TransactionAbortedException("Inner scope not completed");
    }
    using (var scope = new TransactionScope())
    {
        EnlistAll();
        scope.Complete();
    }
}

/// <summary>
/// Instructs the transaction to roll back. A TransactionScope is
/// started and disposed without Complete() being invoked, which
/// forces all registered enlistments to roll back.
/// </summary>
public void Rollback()
{
    using (new TransactionScope())
    {
        EnlistAll();
        // Don't Complete the scope,
        // resulting in a rollback
    }
}

private void EnlistAll()
{
    var tx = Transaction.Current;
    tx.EnlistVolatile(this, EnlistmentOptions.None);
    tx.EnlistVolatile(sink, EnlistmentOptions.None);
    foreach (var notification in enlistments.Values)
    {
        tx.EnlistVolatile(notification, EnlistmentOptions.None);
    }
}

Conclusion

This concludes this series, which has been an attempt to show the benefits and problems we have seen when realizing a novel idea: using the functionality of System.Transactions as a "change gathering infrastructure". The idea has proved viable; however, the timeout problem was a serious bump in the road and forced us to implement code so that the actual functionality of System.Transactions only comes into play in the final moments of the logical scope.