
Fear and Loathing

Gonzo blogging from the Annie Leibovitz of the software development world.

  • Got ideas?

    I have some spare cycles coming up and would like to offer my services to anyone looking for "small" web parts or .NET apps that interact and perform
    some function directly with SharePoint Portal Server or WSS. These are things I want to add to the community, so who better to provide ideas than, well, the community. A couple of caveats to this:

    1. These are small spike projects not giant enterprise solutions so don't ask the world of me (i.e. I'm not going to build a business automation process engine for you).
    2. I cannot make SPS/WSS do things it's not designed for (like providing a web part that grants folder level security) so don't ask.
    3. There are no guarantees on the work done here, and any creations are released to the community with source code for anyone to use. You just get something you can use directly, and you spark the ideas.
    4. This is not contract work so nothing in return is expected.
    5. Please send your ideas, wants, and desires to bsimser@shaw.ca only.

    Bring it.

  • Wrapping PKMCDO and Adding documents via HTTP PUT

    So I spent the better part of yesterday struggling with my problem. Uploading documents to a folder is great, but don't use Copy/Paste to move them anywhere (unless you enjoy losing all your version information). On Monday I'm going to check out a couple of other migration tools, but one of the things with SPIN (besides the fact it has to run on the server) is that it creates its own document library and sets up its own metadata. For us, this just isn't working, so we need to look elsewhere for an option.

    I put together a simple document migration tool. I was tired of migration tools that brought over all the wonderful properties, security settings, etc. and all had to run on the server. What the hell is that all about? WSS is about the web. Anyways, there were two problems I had to solve. “How do I get all the versions out of the old Document folders?” and “How do I get all the documents into their new homes?”. Plain and simple. Just copy all versions from 2001 to a target location on 2003.

    For this I did two things. First I wrapped up PKMCDO (the COM interface to SharePoint 2001) into a couple of C# classes. This lets me access everything in a nice way and doesn't expose me to KnowledgeFolders, Recordsets and all that ugliness. I created a SharePointServer class that I can connect to and get the workspaces. This hides a COM interop class to the KnowledgeServer interface:

    /// <summary>
    /// This class wraps up the entire 2001 server for the
    /// purpose of accessing workspaces, folders and documents
    /// within.
    /// </summary>
    public class SharePointServer
    {
        private string serverName;
        private ArrayList workspaces;
        private SharePointFolder documentRoot;
        private KnowledgeServer server;

        public SharePointServer(string name)
        {
            documentRoot = null;
            server = new KnowledgeServer();
            serverName = name;
        }

        #region Accessors
        public string ServerName
        {
            set { serverName = value; }
        }

        /// <summary>
        /// Returns the workspace list for a server.
        /// Will load it on demand if it hasn't been
        /// done yet.
        /// </summary>
        /// <returns></returns>
        public ArrayList Workspaces
        {
            get
            {
                if(workspaces == null)
                {
                    workspaces = new ArrayList();
                    ADODB.Recordset rs = (ADODB.Recordset)server.Workspaces;
                    while(!rs.EOF)
                    {
                        string url = rs.Fields["DAV:href"].Value.ToString();
                        workspaces.Add(new SharePointWorkspace(url));
                        rs.MoveNext();
                    }
                }
                return workspaces;
            }
        }
        #endregion

        /// <summary>
        /// Gets the document root for a given workspace on the server.
        /// Will load it on demand if it hasn't been created yet.
        /// </summary>
        /// <param name="workspaceName"></param>
        /// <returns></returns>
        public SharePointFolder GetDocumentRoot(string workspaceName)
        {
            if(documentRoot == null)
            {
                StringBuilder folderUrl = new StringBuilder();
                folderUrl.Append("http://");
                folderUrl.Append(serverName);
                folderUrl.Append("/");
                folderUrl.Append(workspaceName);
                folderUrl.Append("/Documents");
                documentRoot = new SharePointFolder(folderUrl.ToString());
            }
            return documentRoot;
        }

        /// <summary>
        /// Connects to a SharePoint server for accessing
        /// workspaces, folders, and items.
        /// </summary>
        /// <returns></returns>
        public bool Connect()
        {
            bool rc = true;

            // Build the string for the server and connect
            StringBuilder serverUrl = new StringBuilder();
            serverUrl.Append("http://");
            serverUrl.Append(serverName);
            serverUrl.Append("/SharePoint Portal Server/workspaces/");

            server.DataSource.Open(
                serverUrl.ToString(),
                null,
                PKMCDO.ConnectModeEnum.adModeRead,
                PKMCDO.RecordCreateOptionsEnum.adFailIfNotExists,
                PKMCDO.RecordOpenOptionsEnum.adOpenSource,
                null,
                null);

            return rc;
        }
    }

     

    It's still slow (COM interop always is) but it works and now I can do nice things like a foreach statement iterating through folders. I also created a SharePointFolder class which wraps up the functions for a PKMCDO KnowledgeFolder (like getting the subfolders). Here's part of that class:

     

    /// <summary>
    /// This represents a wrapper class to more easily
    /// use the PKMCDO KnowledgeFolders object for accessing
    /// SharePoint 2001 items. It uses COM interop so it's
    /// slooow but it works and at least you can use C# iterators.
    /// </summary>
    public class SharePointFolder
    {
        private string folderUrl;
        private KnowledgeFolder folder = new KnowledgeFolder();
        private ArrayList subFolders = new ArrayList();

        /// <summary>
        /// Constructs a SharePointFolder object and opens
        /// the datasource (via a url). COM interop so it's
        /// ugly and takes a second or so to execute.
        /// </summary>
        /// <param name="url"></param>
        public SharePointFolder(string url)
        {
            folderUrl = url;
            folder.DataSource.Open(
                folderUrl,
                null,
                PKMCDO.ConnectModeEnum.adModeRead,
                PKMCDO.RecordCreateOptionsEnum.adFailIfNotExists,
                PKMCDO.RecordOpenOptionsEnum.adOpenSource,
                null,
                null);
        }

        /// <summary>
        /// This loads the subfolders for the class
        /// if there are any available.
        /// </summary>
        public void LoadSubFolders()
        {
            if(folder.HasChildren)
            {
                ADODB.Recordset rs = (ADODB.Recordset)folder.Subfolders;
                while(!rs.EOF)
                {
                    SharePointFolder child = new SharePointFolder(rs.Fields["DAV:href"].Value.ToString());
                    subFolders.Add(child);
                    rs.MoveNext();
                }
            }
        }

        #region Accessors
        public ArrayList SubFolders
        {
            get { return subFolders; }
        }

        public bool HasSubFolders
        {
            get { return folder.HasChildren; }
        }

        public string Name
        {
            get { return folder.DisplayName.ToString(); }
        }
        #endregion
    }
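
    With the two wrappers in place, walking the old server's folder tree is plain C#. Here's a sketch of how they fit together (the SharePointWorkspace class and its Name property are from my other wrapper code, so treat those as assumptions; DumpFolder is just a hypothetical helper for illustration):

    ```csharp
    // Sketch: connect to the 2001 server and recursively walk every
    // workspace's document tree using the wrapper classes above.
    SharePointServer server = new SharePointServer("myserver");
    server.Connect();

    foreach(SharePointWorkspace workspace in server.Workspaces)
    {
        SharePointFolder root = server.GetDocumentRoot(workspace.Name);
        DumpFolder(root, 0);
    }

    // Hypothetical helper: print the folder hierarchy, indented by depth
    private void DumpFolder(SharePointFolder folder, int depth)
    {
        Console.WriteLine(new string(' ', depth * 2) + folder.Name);
        folder.LoadSubFolders();
        foreach(SharePointFolder child in folder.SubFolders)
        {
            DumpFolder(child, depth + 1);
        }
    }
    ```

    Remember each SharePointFolder constructor is a COM round trip, so a deep tree takes a while to walk.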

     

    This allowed me to get everything I needed from the old 2001 server (there are other classes for wrapping up the document and versions). The second problem was how to upload these versions to the new 2003 document library. Just upload the document. That's all I wanted to do.

    There seemed to be a lot of argument about using Web Services, lists, and all that just to upload a document. It can't be that hard. After spending a little time on Google (google IS your friend) I found various attempts at uploading documents through regular HTTP PUT commands. Here's the one that finally worked in a simple, single function:

    /// <summary>
    /// This function uploads a local file to a remote SharePoint
    /// document library using a plain HTTP PUT request. Can be
    /// included in a console app, windows app or a web app.
    /// </summary>
    /// <param name="localFile"></param>
    /// <param name="remoteFile"></param>
    public void UploadDocument(string localFile, string remoteFile)
    {
        // Read in the local file
        FileStream fstream = new FileStream(localFile, FileMode.Open, FileAccess.Read);
        byte [] buffer = new byte[fstream.Length];
        fstream.Read(buffer, 0, Convert.ToInt32(fstream.Length));
        fstream.Close();

        // Create the web request object
        WebRequest request = WebRequest.Create(remoteFile);
        request.Credentials = System.Net.CredentialCache.DefaultCredentials;
        request.Method = "PUT";
        request.ContentLength = buffer.Length;

        // Write the local file to the remote system
        BinaryWriter writer = new BinaryWriter(request.GetRequestStream());
        writer.Write(buffer, 0, buffer.Length);
        writer.Close();

        // Get a web response back
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        response.Close();
    }

    To call it, just pass it the name of a local file and the fully qualified name of the file to be uploaded on the server (document library and filename). You could also modify the code to accept a stream and read the stream in from a web page. Or process an entire directory at once. "Shared%20Documents" is the name of the Document Library on the site. I'm not sure if you need the %20 or not, and it might be better to use a System.Uri object instead of a string here, but it works.

    string localFile = "c:\\test.doc";
    string remoteFile = "http://servername/sites/sitename/Shared%20Documents/test.doc";
    UploadDocument(localFile, remoteFile);
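
    And the "process an entire directory at once" idea is just a loop around the same function. A sketch (the local path and library URL are made up for the example):

    ```csharp
    // Upload every file in a local folder to the same document library.
    // Directory.GetFiles returns full paths; Path.GetFileName strips the
    // directory part so the server-side name matches the local file name.
    string libraryUrl = "http://servername/sites/sitename/Shared%20Documents/";
    foreach(string file in Directory.GetFiles("c:\\docs"))
    {
        UploadDocument(file, libraryUrl + Path.GetFileName(file));
    }
    ```

    This needs System.IO for Directory and Path, and file names with spaces or other special characters would want escaping before going into the URL.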

     

    Easy stuff. Let me know if you want the full source to my PKMCDO wrappers and the migration tool. I may end up posting all the code, but I have to finish it and do some unit testing on it.

  • Working on a simple document migration tool

    I'm kinda fed up with the migration tools for SharePoint (and maybe I'm just still steamed at the fiasco I went through yesterday). The plan with SPIN and SPOUT was to get everything (with history) from 2001 to 2003. Once migrated into the SharePoint Document Libraries, I would just copy these things down to where they belong. Now I'm on the warpath to write a proper migration tool. One step: read from 2001 and write into 2003. It's all there via Web Services and the old SharePoint 2001 OM. Why is this so difficult? All the tools that Microsoft gives you are for mass migrations and assume you want to maintain security and copy all the profiles, etc. 2003 is a completely different beast architecturally and makes you think differently about the way you organize your information. Yes, there are some other commercial tools that might do this, but again they're just not quite right. And to top it off, nothing ever seems to work from your desktop. You have to be running right on the SharePoint servers directly. This is just not right and I'm out to fix it (somehow). Back later with the results of my madness...

    One other note from yesterday's blog: Addy Santo has posted a comment about an excellent deployment tool which assists both in migrations and deployments, and can take most of the pain out of the process. There wasn't much to check out, but I'm going to investigate and see if it can assist with what I'm trying to do here.

  • SharePoint 1 - Bil 0

    Okay, I'm a little frustrated tonight and have experienced, in full-featured dyno-rama glory, that which I call SharePoint and the beast known as WebDAV (warning: this is a little long but does contain important info about a problem with Explorer View and SharePoint). Caveat emptor: this is the situation for me in my environment (Windows 2003, SharePoint Portal Server/WSS, Windows 2000 client and Office 2003). Maybe I've just been up too much watching the Calgary Flames, but I do think there's a problem here (and I'm hoping my MVP buddies will help confirm/deny my findings).

    Ever since I saw the 2003 version of SharePoint, the WSS and those wonderful features my mouth was watering. What a great improvement over what we currently had. One nice feature was the Explorer View in the document libraries. Now, through the web browser, you could drag and drop files and view your SharePoint site like you do your hard drive. Great stuff.

    Tonight I discovered the pain that is WebDAV, the SPIN and SPOUT tools and how things are not always what they seem.

    We're moving about 5GB of documents (10,000 documents or so) from our old 2001 server to the new 2003 system. Great. Of course we didn't do the in-place upgrade as this was a new system, so the quest for tools began. Luckily Microsoft came through with SPOUT.EXE, which exports all the version history from 2001 into XML and file formats, and SPIN.EXE, which lets you import it into WSS document libraries (or SPS areas if that's your thing). Architecture changes, hierarchy changes, and things are now organized differently in the new world. While the export ran fine (although taking 4 hours over the network), the problem came when trying to put documents in the right place.

    So there are some bugs with Explorer View, and quite frankly I recommend deleting that view until these things are resolved or someone comes along with a better implementation of it. As I understand it, it works by using WebDAV to connect to the Document Library. There are three serious problems with the Explorer View, which I'll get into in more detail:

    1. WebDAV can only handle a certain path length and bombs on long URLs
    2. The Explorer View in your web browser is cached and never refreshed unless you explicitly do it yourself
    3. Copy and Paste is a nightmare in versioned libraries

    Let's get into more detail about these.

    1. If you've ever gone fishing in your SharePoint sites and then clicked on Explorer View only to be informed that the view requires Internet Explorer 5 or higher, then you'll know what I'm talking about. It's an odd message because when I get it, I'm running IE 6. So that's not the problem. I've narrowed it down to the fact that the path gets too long. When you get into specific folders in Document Libraries and specific views, the URL gets pretty long. IE itself can handle it, but when it switches over to Explorer View it starts talking WebDAV to the back end. That's where the view fails and you end up with a cryptic error message.

    2. Here's a screwy thing that took me a few minutes to figure out. Create a Document Library then go in and add a file. Great. Switch to Explorer View (assuming you don't run into issue #1) and behold your document. Flip back to All Documents view and add a folder. Delete a document. Do whatever you want. Now back to Explorer View in your browser. Hmmm. That's not right. It's the same thing I looked at a few minutes ago (sans the changes you just did). I think the Explorer View is cached and the browser is using that cached version. Hitting F5 doesn't work because that's only going to refresh the web page, not the folder view. You have to right click and select Refresh from the popup menu (yes, a different refresh from the browser one).

    3. Okay, here's the grand daddy of them all (and it gets complicated so walk with me on this). DO NOT USE COPY AND PASTE IN VERSIONED LIBRARIES! Hmm. Got the message? Here's the rundown.

    1. Create two Document Libraries (“doclib1“ and “doclib2“) and set versioning on (can be in the same site, doesn't matter)
    2. Create a text file with Notepad or something (“ver.txt“) with the words “Version 1“ in it
    3. Upload “ver.txt“ to “doclib1“
    4. Check it out via the menu
    5. Edit your “ver.txt“ file on the hard drive and change the text to “Version 2“
    6. Upload “ver.txt“ (modified) to “doclib1“ (overwriting any current copy)
    7. Use the menu and check the file in

    Looking at the version history, you now have two versions. Click on the first one and you'll see the text “Version 1“. Click on the second (and hit F5 in your browser) and you'll see “Version 2“. Also notice that there's a new path created for version 1 of the document (if you hover over the link in the document library).

    Now the fun begins. Switch to the dreaded Explorer View in “doclib1“. Select your “ver.txt“ file and press Ctrl+C (Copy). Now go find “doclib2“ and switch to Explorer View (dual monitors makes this much easier). Press Ctrl+V (Paste). Voila. You now have “ver.txt“ in your new Document Library. Wait a minute. Something isn't quite right. The version comments (if you had any) all say the same thing as the last version. Click on the file to view the Version History. You'll notice the right number of versions (two in this example, but if you had 10 in “doclib1“ you'll have 10 in “doclib2“). However they're ALL THE SAME VERSION! Yup, SharePoint created multiple “versions“ of your document but they're all copies of the latest one.

    There are actually two different behaviours here. Copy/Paste from one Document Library to another in the same site yields this result. However, Pasting into a Document Library on another site is a whole nuther matter. Try it and view anything but the latest version. SharePoint lists the versions but they don't exist. In fact, nothing does. It's odd. Clicking on the file produces a web page that goes to HTML Valhalla. There's no 404 error. Right click and View Source and there's a complete absence of anything. No HTML tags. Nothing. Very, very odd.

    Anyways, as I said earlier your mileage may vary, but if you can confirm item #3 I suggest you do one of two things: do not use versioning in your document libraries, or delete the Explorer View. Hopefully this will help; now I just have to figure out how to move all my version histories to the right places.

  • Remembering Design Principles

    After taking a look at NDepend and running it on a few projects at work, most of the assemblies seem to be living in either the Zone of Pain or the Zone of Uselessness. While I'm not using NDepend as the silver bullet to tell me who's been naughty or nice, I am wondering how many basic design principles have been forgotten (or in some cases never learned in the first place).

    The common ones that I keep going back to (and evangelise to those that care) are:

    There are others, but these are usually the most common and most violated as I keep looking at other peoples code (and my own for that matter).

  • Design Quality Metrics for .NET Applications

    I've been struggling with a concept at work. How to measure the quality of an application? It's typical to ask and there are some basic things you can look at, but for the most part you're doing code reviews or looking through architecture diagrams trying to figure out what someone built and is it good.

    Now along comes NDepend. It analyses .NET assemblies and generates design quality metrics around them. You can measure the quality of a design in terms of extensibility, reusability, and maintainability. Pretty nice stuff and, of course, open source (yay for OSS!). You can check out the latest version of NDepend here and see a sample report of NDepend run against NUnit here. Neat stuff.

  • Unit Testing with SharePoint Web Parts

    In my experience, one of the best things you can do (if you follow any XP practices at all) is to write Unit Tests. These are a small, focused set of tests that perform some specific testing of the system. For an overview of Unit Tests check out this link here. In the .NET space, NUnit is my preferred Unit Test framework. Your mileage may vary.

    When writing Web Parts for SharePoint, you can apply the same practice to your web part and exercise the Unit Tests using the NUnit GUI (or NUnit command line if you prefer). First download NUnit from here and install it to your development environment and on the SharePoint server.

    How you set up your tests is up to you (and depends on what dependencies your web parts will need), but try to position the Test Fixture as high as you can on the SharePoint tree, holding whatever information you need. Get a reference to the entire web collection if you want so that your tests don't have to tax the SharePoint server too much when running. Then just manipulate the collection during your tests. As long as you clean up after yourself (like deleting a site after creating it) everything should be fine.

    Here's a sample test for a custom webpart that creates new sites (using the SPWebCollection class). The test counts the number of existing sites and stores it, then adds a new site and checks the count (you could also find the site using a query or something, but this is a simple example). The test uses data from the web part for input into the site creation process (name, description, etc.):

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Xml.Serialization;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.WebPartPages;
    using Microsoft.SharePoint.Utilities;
    using System.Web.UI.HtmlControls;
    using NUnit.Framework;
    using CustomWebPart;

    namespace CustomWebPart.Test
    {
        [TestFixture]
        public class CustomWebPartTest
        {
            // This is our custom web part which
            // has some public properties for the new site
            private CustomWebPart webPart;
            private SPWeb webSite;
            private SPWebCollection siteCollection;
            private string currentTemplate;

            [TestFixtureSetUp]
            public void Init()
            {
                webPart = new CustomWebPart();
                // SPWeb can't be constructed directly; open it from an SPSite
                SPSite site = new SPSite("http://Server_Name/sites/Site_Name");
                webSite = site.OpenWeb();
                siteCollection = webSite.Webs;
                currentTemplate = webSite.WebTemplate;
            }

            [Test]
            public void TestAddSite()
            {
                string siteName = webPart.SiteName;
                string siteDescription = webPart.SiteDescription;
                string siteUrl = webPart.SiteUrl;
                int siteCount = siteCollection.Count;
                // Add the new site
                siteCollection.Add(siteUrl, siteName, siteDescription, Convert.ToUInt32(1033), currentTemplate, true, false);
                int newCount = siteCollection.Count;
                Assert.AreEqual(siteCount + 1, newCount, "Invalid site count");
                // Delete the site we just created
                siteCollection.Delete(siteUrl);
            }
        }
    }

    You can reference any part of your Web Part or invoke a method on it to help with your tests as needed. As well, once you create a connection to a SPSiteCollection or SPWebCollection you can iterate through sites or get individual sites and perform any tests on them. Just keep your tests clean and simple and only test what you need to as it relates to your Web Part.
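
    For the clean-up-after-yourself side of things, a [TearDown] method is a reasonable safety net: it runs after every test, so even a failed test won't leave a stray site on the server. This is just a sketch using the same fixture fields as the sample above (webPart.SiteName and webPart.SiteUrl are the hypothetical web part properties from that example):

    ```csharp
    [TearDown]
    public void Cleanup()
    {
        // If a test failed before its own delete ran, make sure
        // the test site doesn't linger on the server.
        foreach(SPWeb web in siteCollection)
        {
            if(web.Name == webPart.SiteName)
            {
                siteCollection.Delete(webPart.SiteUrl);
                break;
            }
        }
    }
    ```

    Breaking out of the loop right after the delete avoids iterating a collection that was just modified.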

  • Settling in and linking to the world

    Okay, I've got this .Text thing down pat now. I've been blogging for a few years, starting with the original sin, Blogger (way back before Google capitalized on it, when it was just a spark in Evan's eyes). Now I'm juggling a few blogs on various subjects but I'm happy with where things are.

    I did have to spend a couple of hours tonight (which I'm sure people will consider a waste) copying all my RSS bookmarks into RSS Bandit so now I've got a single source for my geek-news aggregation. I've also added the noteworthy ones to this site in the appropriate categories (mostly SharePoint). Of course this meant I had to go through an exasperating sequence:

    1. Launch the site in my browser
    2. Copy the shortcut to the RSS feed
    3. Create a new feed in RSS Bandit
    4. Create a new link in this site
    5. Paste the RSS link
    6. Copy and paste the web URL
    7. Enter the name of the site
    8. Lather, rinse, repeat

    Whew. Well, now that that's done it's over. Bandit is happily churning away grabbing news and telling me when something I might consider useful in the world changes, which is a much better model than me visiting a few dozen websites looking for something useful (and usually finding drivel). I highly recommend RSS Bandit (or a similar news aggregator, but Bandit is Open Source and written in .NET) to get on top of these things.

    I've also added a new link group called Articles. .Text is a great system and lets you post articles rather than blog postings for whatever you see fit. The recommendation is to create a post in your blog about an article with a link to it, but after that blog rolls off your active list how can you access it? So the brain fart came over me to create a link group called Articles and post them there. I've added the first one that I'm pretty proud of which is a Workshop template you can use to ease people into Adopting Agile Practices into the Enterprise. It was assembled from some ideas I've had and tidbits of info I found on the net around the benefits of Agile and XP. I've applied it to a typical waterfall type development group/PM where I work with a lot of success. I'm planning on getting all new .NET development following it in the new year so if you have any suggestions on the content feel free to leave a comment in the article. I'll be updating it as I find tweaks to the Workshop.

    Finally I imported a large MVP list into my MSN so for those of you who are wondering who the heck is this guy it's me. Pay no attention to the man behind the curtain.

  • Hello World

    Well, I made it with few scars and most of my body parts intact.

    This is probably the 4th or 5th blog I've set up on the net, but the first that's specific to .NET and various other Microsquishy topics. I set up home here as I'm now officially a Microsoft Most Valuable Professional (MVP), getting my award back in April of this year. Microsoft MVPs are acknowledged by peers and also by Microsoft for their active participation in Microsoft technical communities around the globe. My MVP award came as a result of my participation in the SharePoint community. Over the last couple of years, I've become quite intimate with the Microsoft Portal product, introducing it to my organization (CP Rail), getting people doing things differently with document management, and mentoring people on what a “portal” really is.

    This week I'll be hosting an Ask-The-Experts booth at the TechNet Canada Spring Tour in Calgary on June 3rd. The event will begin with an overview of best practices and strategies for ensuring client and server security. In the afternoon, we'll also review the new features of Microsoft Systems Management Server (SMS) 2003, and show how SMS 2003 integrates with Network and Operating System Technologies such as Active Directory and Windows Management Instrumentation. The event will conclude with a look at the new features in Windows Small Business Server 2003, starting with a fresh out-of-the-box installation. Feel free to approach me and chat about anything, and there'll be some goodies you get courtesy of Microsoft. The Calgary event will take place at the regular location, Paramount Chinook down on McLeod Trail. Information and registration can be found here.

    Anyways, I'm off and running on this blog now. I have to thank Scott Watermasysk and .Text for this site and the ease of posting my info here. I just have to figure out how to configure this site and add more info. I'll be posting blog snippets of .NET code, SharePoint tips, Agile software development techniques and experiences, and whatever else comes along the way.

  • Adopting Agile in the Enterprise - The QuickStart Workshop

    If you're having difficulty adopting agile development at your workplace or just need a helping hand, this document might help out. This is an outline that I've prepared at my work to do just that. It outlines a proposal and plan for hosting an Application Development QuickStart Workshop. Feel free to adapt it for your use; it's been generalized enough that it should fit into any IT department.

    Introduction

    The QuickStart Workshop is meant to get development teams up and running with new application development quickly and easily and down the path to success. This is accomplished through:

    • Creating an agile development environment for the team
    • Engaging the team in an intense development session
    • Practicing best of breed attributes of Agile Development. This includes eXtreme Programming, Iterative Development, and Test Driven Design
    • Creating a solid application design and reference architecture to start from
    • Producing the first iteration of an application to continue growing it throughout the development lifecycle

    The workshop is customized to fit each project team's needs. By utilizing a series of options, you will leverage each member's time for the greatest benefit.

    Why Not?

    The big question is why wouldn’t you adopt this workshop and practice it? The big reason is that the benefits listed below will be immediately recognizable in your project and your team dynamics. In comparison to traditional development you can see the differences below:

    Traditional Development                    Agile Development
    Long, “big-bang” delivery cycles           Short, frequent delivery cycles
    Process-driven                             Business value-driven
    Comprehensive documentation                Working software with minimal documentation
    Contract                                   Collaboration
    Minimize change                            Embrace change
    Specialization                             Empowered teams

    What does the Workshop provide?

    • An understanding of the agile processes and methodology, emergence, self-organization, and adaptation
    • An understanding of how to manage emerging requirements and unstable technology and still provide guaranteed business value within a fixed cost and time.
    • A preliminary product backlog of project requirements to drive the first several iterations and an understanding of how to manage the product backlog.
    • An understanding of how to plan and initiate iterations from product backlog.
    • An iteration plan for the first iteration.
    • An understanding by the team of how to self-organize and work with emerging requirements, architecture, and design.
    • An optimized work environment and infrastructure for the team.
    • An understanding of the theory, practices, and values that underlie agile processes. At the end of the workshop, the team will be well on its way to its first iteration to create an increment of user functionality for the project.

    Practices Experienced During the Workshop

    The Workshop is focused on delivery and collaboration. It helps teams get to know and follow most of the 12 core practices of the eXtreme Programming (XP) model. These practices are not necessarily followed to the letter; the intent is to provide exposure to them where it makes sense. These are:

    • The Planning Game: Business and development cooperate to produce the maximum business value as rapidly as possible. The planning game happens at various scales, but the basic rules are always the same:
      Business comes up with a list of desired features for the system. Each feature is written out as a User Story, which gives the feature a name and describes in broad strokes what is required. User stories are typically written on 4x6 cards.
      Development estimates how much effort each story will take, and how much effort the team can produce in a given time interval (the iteration).
      Business then decides which stories to implement in what order, as well as when and how often to produce production releases of the system.
    • Small Releases: Start with the smallest useful feature set. Release early and often, adding a few features each time.
    • System Metaphor: Each project has an organizing metaphor, which provides an easy to remember naming convention.
    • Simple Design: Always use the simplest possible design that gets the job done. The requirements will change tomorrow, so only do what's needed to meet today's requirements.
    • Continuous Testing: Before programmers add a feature, they write a test for it. When the suite runs, the job is done. Tests in XP come in two basic flavors:
      Unit Tests are automated tests written by the developers to test functionality as they write it. Each unit test typically tests only a single class, or a small cluster of classes. Unit tests are typically written using a unit testing framework, such as NUnit.
      Acceptance Tests (also known as Functional Tests) are specified by the customer to test that the overall system functions as specified. Acceptance tests typically exercise the entire system, or some large chunk of it. When all the acceptance tests pass for a given user story, that story is considered complete. At the very least, an acceptance test could consist of a script of user interface actions and expected results that a human can run. Ideally, acceptance tests should be automated, either using the unit testing framework or a separate acceptance testing framework.
    • Refactoring: Refactor out any duplicate code generated in a coding session. You can do this with confidence that you didn't break anything because you have the tests.
    • Pair Programming: Code is written by two programmers sitting at one machine. Essentially, all code is reviewed as it is written.
    • Collective Code Ownership: No single person "owns" a module. A developer is expected to be able to work on any part of the codebase at any time.
    • Continuous Integration: All changes are integrated into the codebase at least daily. The tests have to run 100% both before and after integration.
    • 40-Hour Work Week: Programmers go home on time. In crunch mode, up to one week of overtime is allowed, but multiple consecutive weeks of overtime are treated as a sign that something is very wrong with the process.
    • On-site Customer: Development team has continuous access to a real live customer, that is, someone who will actually be using the system. For commercial software with lots of customers, a customer proxy (usually the product manager) is used instead.
    • Coding Standards: Everyone codes to the same standards. Ideally, you shouldn't be able to tell by looking at it who on the team has touched a specific piece of code.
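    The test-first rhythm behind Continuous Testing can be sketched in a few lines. In the .NET context above the natural framework is NUnit; the sketch below uses Python’s standard unittest module as a stand-in, and the ShoppingCart class is an invented example, not part of the workshop material.

    ```python
    import unittest

    # A hypothetical class under test -- in test-first style, it is written
    # only after the test below already exists and fails.
    class ShoppingCart:
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    # The unit test comes first: it fails ("red") until ShoppingCart is
    # implemented, then passes ("green"), and stays in the regression suite.
    class ShoppingCartTests(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add("book", 10.0)
            cart.add("pen", 2.5)
            self.assertEqual(cart.total(), 12.5)
    ```

    Running the suite with `python -m unittest` (or the NUnit runner in .NET) after every small change is what gives the team the confidence to refactor described later.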

    Benefits

    The benefits of participating in the workshop are realized quickly by each team. They are tangible and measurable, and include:

    • Understanding the application design and architecture process and how a team fits in
    • Collaborating as a highly effective team working towards a single goal
    • Learning new practices and technologies like .NET, Agile and iterative development, and Test Driven Design
    • Managing change successfully, without fear of changing code in the system or of completely throwing away concepts that don’t work
    • A working and fully tested system at all times, one that delivers and evaluates business value on an ongoing basis
    • Creation of re-usable components and services that other projects, teams, and the enterprise can leverage to reduce future development cycles

    In addition to the benefits above, the workshop adheres to the following practices, each tied to a direct benefit:

    Development Practice: Capturing business requirements in a form that can be estimated and prioritized within the overall product development.
    Benefit: Provides the business with control over the cost and scope of the project, while encouraging an iterative release cycle that puts the highest priority features into production first.

    Development Practice: Test-Driven Development describes the practice of writing unit tests for a product feature before writing the code for the feature itself. The growing suite of unit tests forms a regression test framework that is run tens if not hundreds of times per developer per day during the development lifecycle.
    Benefit: Unit testing is required for production quality code, so writing unit tests before writing code ensures that all code is production quality at all times. The regression suite ensures that new features cannot break existing functionality, giving developers the courage and confidence to make major technical or business changes without fear of failure.

    Development Practice: Continuous Integration describes the practice of integrating each developer's code changes into the overall project as soon as their unit tests are passing and their code is complete. Continuous integration involves running the full suite of unit tests tens if not hundreds of times a day, alerting the team instantly should a previously working piece of functionality be compromised during the development lifecycle.
    Benefit: By putting integration and testing into the heart of the development process, the financial cost of integrating and testing code at the end of a development cycle is reduced. Furthermore, since this testing cycle typically occurs immediately prior to a fixed milestone such as a product launch, it is often omitted through necessity; by integrating and regression testing continuously, this quality compromise does not occur.

    Development Practice: Refactoring describes the agile practice of changing the design of existing code to a more elegant solution.
    Benefit: Rather than invest in a period of "big up-front design", an agile team only ever implements the functionality that is required by the business, reducing time to market and cost while dramatically reducing the opportunities for software failure. The existence of a comprehensive regression test framework supports this practice of refactoring.
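    As a rough illustration of refactoring under a regression test, the sketch below (invented names, Python standing in for .NET) extracts duplicated formatting logic into a single helper. The test passed before the change and still passes after it, which is what makes the refactoring safe.

    ```python
    def _format_line(label, amount):
        # Extracted helper: the formatting that was previously duplicated
        # in invoice_line and refund_line now lives in exactly one place.
        return f"{label}: ${amount:.2f}"

    def invoice_line(item, price):
        # Before the refactoring this built the string itself, duplicating
        # the identical logic in refund_line below.
        return _format_line(item, price)

    def refund_line(item, price):
        return _format_line(item, price)

    def test_line_formats():
        # Regression test: unchanged by the refactoring, it proves the
        # observable behavior is still the same.
        assert invoice_line("book", 10) == "book: $10.00"
        assert refund_line("pen", 2.5) == "pen: $2.50"
    ```

    The point is not the helper itself but the order of operations: run the tests, make the small structural change, run the tests again, and only then move on.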

    This mix of quality practices and methods ensures that the risk, cost, and scope of a software project are visible and manageable by the client. In software development, escalating costs, slipping timescales, and poor quality have established themselves as the norm rather than the exception.

    Engaging the Workshop

    The Workshop is a part of the application development kick-off and the first thing that happens with the project team. The requirements to engage the workshop are:

    • Use Cases or requirements around the basic goal of the system. These do not need to be polished to perfection, but there needs to be some idea of the goal of the project. These can be as simple as the high-level Use Cases that come out of the concept phase of a project.
    • A team or core group of developers with an eagerness to deliver
    • 3 to 10 contiguous days of time allocated by the group. No meetings should be booked during this time. The length depends on the size of the group and the project, but will not exceed 2 weeks (10 days).
    • If possible, a customer/business user or representative who is comfortable with defining requirements for a system

    Workshop Schedule

    The workshop schedule will vary from project to project but basically it kicks off with the following activities:

    1. Project Initiation and Charter
    2. Iteration Planning
    3. Shape Architecture
    4. Monitoring and Reporting
    5. Requirements Gathering
    6. Delivered-feature Contracts
    7. Facilitating Change
    8. Peer-to-peer collaboration: paired programming, test-driven development and design, unit testing, daily Scrum meetings

    How Do I Start?

    The big question is how you start this practice, get a workshop going, and continue the practice with your team. Here’s one possible approach:

    1. Set up a ½ or 1 hour session to determine the size and scale of the workshop. You can orchestrate this yourself or contact a group that can help you facilitate it, such as a core Development Services team.
    2. Allow 1-2 weeks of lead time to arrange the facilities (room bookings, software installations, etc.). Have the team clear their calendars for the duration of the workshop. The workshop may finish early, but be prepared for the entire length.
    3. Set up a 1 hour kickoff meeting prior to the first day of the workshop. This sets expectations, addresses any questions about how things will progress, and lays out the ground rules for the workshop. This ensures everyone is comfortable with the process and knows what success means to them.

    Like any journey, everything begins with a first step. It is not, however, the last step. Each project will engage the workshop differently, and as teams become familiar with the practices engaged here, the duration of future workshops will shorten. You should provide “booster” workshops as a refresher from time to time. Not only will teams change, but the underlying practices will too, as will the projects and technologies involved. The workshop will evolve as time goes on, helping to keep your development teams running smoothly and working as effective units.