
Fear and Loathing

Gonzo blogging from the Annie Leibovitz of the software development world.

  • Visual Studio Team System Beta 2 First Looks

    A bit of a mouthful for a blog title but I got the whole meal deal running yesterday with Visual Studio Team System. It took 2 server setups, 1 client, a few gallons of coffee, and several dozen installs, uninstalls, re-installs and compiles to make sure everything worked correctly. And boy is it nice. I installed the Suite version, which gives you all the tools that the Developer, Architect, and Test flavours have, so I could give everything a run for its money. Bear with me as it's been over 24 hours since I saw B1 and I may remember things that were there and others that were not.

    First off, it's visually a little slicker. I can't put my finger on it but overall it's looking more polished and user-friendly. The install runs more like the Express product installs and has less clutter yet more information than Beta 1. During the install you see the list of what is being installed, which is nice. I think the previous one just told you from time to time what it was doing, but this shows what's going on (okay, maybe it was too many coffees for me and it's been there all along). Once inside the product, again small visual tweaks just make it look a little better. They must have some ergonomic design team measuring every pixel and trying RGB(243, 210, 120) instead of RGB(242, 210, 120) for the toolbar colours. Overall it looks nice (but then you would hope it's visually appealing considering how much money you're going to outlay for this puppy).

    The install took an average amount of time, maybe faster than B1 (once you uninstalled your previous version). Now I'm running on a 3 GHz Xeon with 2 GB of RAM so adjust your numbers accordingly. It was about 45 minutes for a full install from start to finish (including a reboot after the Framework was installed). If you just want Visual Studio without the other goodies then it should take about 30 minutes. I installed the full MSDN Library as well, which took another 10 minutes, then 5 minutes to watch it "optimize" everything else. So give yourself a couple of hours and relax. Some days I wish these guys would include a short film or maybe an episode of CSI (Las Vegas, not the crap ripoff versions) to watch while you're installing.

    Warning! Uninstalling Beta 2 will uninstall all previous versions of .NET. Yup, while Beta 2 and the 2.0 Framework run side-by-side no problem, uninstalling it is another matter. When you uninstall Beta 2 (as I had to do once during my setup fiasco) it wiped out, removed, and decided that I shouldn't have version 1.1 anymore either.

    The IDE comes up quickly (quicker than B1 anyways) and has some small but nice changes. There's a new Community menu (at least I think it's new) that takes you (via the Microsoft Document Browser) to the feedback site to ask questions, check on a question's status, or go to the new Starter Kits that are being created for 2.0. The Code Snippets are still plentiful for VB but sparse for C# (which strikes me as odd but I'm sure that will change).

    I only looked at the Class Designer for now (as I still can't realistically represent a SharePoint application or site in the other Designers). The biggest change here was the ability to export to an image. You don't know how much trouble I would have to go through to screen cap a model then paste it into Paint.NET just to show it to someone. There's also a Layout Diagram option that re-arranges the diagram (works for the most part) and a couple of other little things like setting auto-width on the class so all method names are visible. I wish some of these things could be set in the options but the only Designer I could see was the Domain Model Designer and it had an error on the property page. The two big things I saw with the Class Designer were showing a property as a Collection Association (which will tell you that it can't do it if it's a weakly typed collection) and showing Abstract Classes (previously abstract classes would just show as regular classes; now they get dotted borders around them). There's also a nice shortcut on the menu to show/hide member types, which is handy when you're looking at the overall diagram.

    I didn't notice any new refactorings available; however, when I was doing a quick TDD test, the infamous squiggly line would appear saying my method didn't exist, and if I right clicked on it there was an option to generate a method stub. It couldn't introduce a new class but would add a method/property/etc. to an existing class. When calling methods/classes that didn't have a reference (but were in the solution) there was now an option called Resolve. This gave me a couple of options to either add the using statement or fully qualify the class name. Still not as slick as ReSharper but better than nothing. Hopefully we'll see the ReSharper for VS2005 soon.

    The big thing I found is that the Team Foundation Server now allows you to do single or dual deployment installs. Previously each part of Foundation had to reside on a separate computer so I had to consume 3 virtual machines for this (each with 1GB of RAM allocated). Now you can do a single server deploy with the data and application tier all in one. This does require SQL Server 2005 and won't install alongside some products (for example you have to uninstall Portal Server but you still need Windows SharePoint Services for the project sites).

    The Team Server stuff is just plain sweet. Once you're connected to Team Server, anytime there's a warning or error you can right click and choose to create a work item which then creates a task in SharePoint assigned to someone. Very slick to assign work this way to your team (and play a king of bughill game). Test results will be posted to the associated SharePoint site and show up in graphs depicting success to failure rates, code coverage, etc. All that neat stuff that execs and PMs want to see and that we architects and developers loathe to create.

    The Test Manager is great and while some people are against generating tests after the fact, it's a nice thing to quickly get a stub created from your Domain Entity into a test project. Results from each test run are saved. You can create your own test lists and load other test lists (like ones created for common assemblies) into your test manager to run along with it. This lets you divide and conquer how you want to test your system.

    Basically the test manager being built into the system can support programmer unit tests, manual tests, automated web tests, and generic tests of any kind. I see this as giving products like QuickTestPro and even TestDirector a run for their money. Combine this with the Team Server features and connectivity and you've got yourself a full suite of tools in one package. Crunch the numbers and I think you'll find that while most people gasp at the $10k/seat price tag of the Suite, add up all those other licenses you have for the various products out there (bug tracking, test management, source control) and you'll find it's about the same in the grand scheme of things.

    All in all, an excellent product with lots of new possibilities with the Domain Specific Language modellers and designers. I'm looking forward to poking at it over the next few months as we wait in antici....pation for the final release.

  • Visual Studio 2005 Beta 2

    Hmmm, wonder how I missed a blog from yesterday? Must have been one of those days.

    Anywho, the Visual Studio 2005 Beta 2 release is available to MSDN subscribers now on the download site. It includes refreshes of SQL Server 2005 and the various Team versions of VS2005 (developer, test, architect). It also includes the Suite version, which includes all three as well as the foundation (the server portion of VS2005). The Suite version is available so you can see what capabilities each flavour has and decide which one is right for you. Regular MSDN subscribers will only get to choose one when the product is released so here's your chance to give each one a test drive.

    (image from greg hughes - dot - net)

    What's odd is that I checked to see if it was available yesterday around 3PM, which it wasn't. I checked later and started downloading it off the MSDN site this morning around 1AM. By the time I headed off to work this morning at 8AM it hadn't finished downloading, so I decided to just start a new download at work (besides, it's a 3.5 GB image and I really didn't want to burn a DVD). However, checking now it's not available to me (under the same account). Very odd to see how the trickle works, so I must be hitting one of the machines in the web cluster that it hasn't made its way down to yet.

    Update: 10AM and the files are there. Now another 15 hours of downloading and a few days of blowing up virtual images and we'll be all set.

    Anyways, check it out if you have an MSDN account. I think they'll be making DVDs available for a shipping fee like they did for Beta 1, but I'm sure you'll be able to pick one up at any Microsoft event, like TechEd 2005 coming up in June.

  • Expanding and Collapsing Large Fields, DataView Style

    Okay, got a fun one today. Someone with a list asked me if the description column could be collapsed and expanded. Currently if you have a list with a description field and put that field on a view, you're left with a large verbose scrolling piece of ugliness. After a few vanilla lattes and some dead brain cells I came up with what I thought was a pretty good solution (without having to resort to writing a custom Web Part). Here it is.

    1. First create the list or decide what list you're using. It can be any list but the reason we're doing this is because we want to show a description field or something that would have a lengthy bit of text.
    2. Create a new Web Part page and put the list on it with the view you want to use.
    3. Now load up the page in FrontPage 2003.
    4. Right click on the list and choose "Convert to XSLT..." to convert it into a DataView.
    5. Somewhere down in the page source for the DataView there's the XSLT code to display the Description field you want this to work on. In my example I have a field called Description so you'll see something like this:

      <!--Description-->
      <TD Class="{$IDAEAF1I}">
      <xsl:value-of disable-output-escaping="yes" select="ddwrt:AutoNewLine(string(@Description))"/>
      </TD>


    6. Now we want to change the output to show our collapsed or expanded text. This is done by surrounding the XSL tag with some regular HTML and a reference to some JavaScript (yes, horrors of horrors) that we'll add to the page later. So change your XSLT code to your liking but it'll be something like this:

      <!--Description-->
      <TD Class="{$IDAEAF1I}" width="200px" bgcolor="#ffff00">
          <a title="Show/Hide" href="javascript: void(0);" onclick="toggle(this);">
              <div>
                  <xsl:value-of disable-output-escaping="yes" select="ddwrt:AutoNewLine(string(@Description))"/>
              </div>
          </a>
      </TD>


      In the example above, I've done a few things: Fixed the width of the TD surrounding the Description field to 200 pixels; Given it a yellow background so it stands out; and added the reference to our JavaScript function called "toggle" in the OnClick event of the new link surrounding our XSL output.
    7. Now we'll add a simple piece of JavaScript (yeah, you knew this was coming) to the page through a Content Editor Web Part. Add it to the page then in the Source View add this JavaScript:

      <script language="javascript">
      // Trim text down to at most 'size' characters, adding an ellipsis
      function trimText(text, size)
      {
          if (text.length <= size)
          {
              return text;
          }
          return text.substring(0, size) + "...";
      }

      // The collapsed state and the full text are stored on the link itself
      // so each row in the view toggles independently
      function toggle(link)
      {
          if (link.isCollapsed)
          {
              link.innerHTML = link.fullText;
              link.isCollapsed = false;
          }
          else
          {
              link.fullText = link.innerHTML;
              link.innerHTML = trimText(link.fullText, 60);
              link.isCollapsed = true;
          }
      }
      </script>


      This saves a copy of the Description field's text (for restoring later) and trims it down to the length you specify (I've used 60 here but adjust as you see fit).

    That's it. Here's what the DataView Web Part looks like expanded:

    And here it is collapsed after clicking on the Description:

    Simple and easy. Hope that helps.
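    One refinement if the hard cut bothers you: trimText above can slice a word in half. Here's a small sketch (the function name trimTextAtWord and the 60-character limit are my own additions, not anything SharePoint gives you) that backs up to the nearest word boundary before adding the ellipsis:

```javascript
// Trim text to at most 'size' characters, backing up to the last
// space so we don't cut a word in half.
function trimTextAtWord(text, size)
{
    if (text.length <= size)
    {
        return text; // already short enough, nothing to do
    }
    var cut = text.substring(0, size);
    var lastSpace = cut.lastIndexOf(" ");
    if (lastSpace > 0)
    {
        cut = cut.substring(0, lastSpace); // back up to a word boundary
    }
    return cut + "...";
}
```

    Drop it in place of the trimText call in the toggle function if you prefer whole words in the collapsed view.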

  • SharePoint Podcasting in English

    Michael Greth over in Germany has put together a neat podcast on SharePoint. Mike has been doing SharePoint podcasting for awhile now (he's on his 10th one) but this is the first one he's put together in English (with his uber-cool German accent). You can check out Mike's post here or download the MP3 file directly here.

    If you don't know what podcasting is, think of it as audio blogging (overly simplified but it's the concept). There are also video podcasts but for the most part they're usually audio (MP3, etc.). The term sort of grew out of Apple's iPod but has gone on way beyond that. It's a pretty slick concept and very interesting. In the IT world (and SharePoint specifically), other than news, I'm still trying to figure out how it could be interesting. I used to listen to an internet broadcast audio show (via BluesNews) on Quake and the various FPS happenings out there (can't remember the name of the show) and it was always fun. Having a weekly SharePoint show is a concept too, but then there's podcasting so you could just put MP3s up for people to listen to. I'm still just trying to figure out how I could make SharePoint exciting through audio because without a whiteboard and some code samples, I'm a pretty boring guy. There's a pretty cool article on making your own podcasts (as well as how to get them) here so check it out (and don't blame me if you become addicted).

    Patrick Tisseghem jumps in for a short bit to talk about the SharePoint Advisor Conference in June as well. I was planning on going to the conference but for some reason nobody got back to me quickly enough to get a presentation ready (or I didn't get it submitted in time, or some forces of nature prevented me from connecting to the city that never sleeps) so again I won't make it (that's 3 conferences I've missed so far this year, but for sure I'll be at PDC and the MVP Summit).

    Anyways, I'm quite supportive of this and will probably try doing a little of it myself (doing a text blog as well as a podcast or something). Guess my short radio DJ job so many years ago might still pay off. Watch for something silly in this space (more silly than my normal rants) coming soon to an MP3 player near you.

  • SBS Usergroups Workshop - Hosted by SBS MVPs

    Forgive me bloggers, for I have sinned. It's been a week since my last blog. Okay, so no SharePoint or DDD babble today but wanted to tell you about an upcoming event on SBS (Small Business Server) here in Cowtown.

    From Susan Bradley's blog:

    The Canadian SBS MVPs invite resellers and consultants who want to understand solutions for small business customers to join together for an evening of presentations and discussion. The local chapters of SBS and Windows professional groups across Canada are joining together with sponsorship by Microsoft to bring a group of Microsoft Small Business Server “Most Valuable Professionals” to meet with you. Each of the SBS MVPs appearing on this tour is an experienced reseller and a community leader. You can expect the same no-nonsense expertise on SBS and related technology applications you read in the newsgroups to be brought to this discussion.

    You will have a unique opportunity to speak informally with, or ask technical questions of, some well-respected MVPs from across North America, including our special guest, Jeff Middleton SBS MVP (US). For this event series (except Toronto), Jeff will be explaining how his Swing Migration method for SBS and Windows server upgrades puts an end to working weekends or extended business shutdowns.

    Here are the event details:

    Thursday, April 21, 2005 6:00 PM - 10:00 PM (Mountain Time)
    Welcome Time: 5:45 PM
    Language: English

    Location: Altius Centr - 2nd Floor Boardroom
    500 4th Ave SW
    2nd Floor
    Calgary, Alberta

    Here's the link to the MSDN Event page where you can register. Hope to see you there.

  • Repository Save vs Update

    One thing that has bugged me to no end (yes, I'm easily irked) is the concept of Save vs. Update. People seem to always follow the CRUD (Create, Read, Update, Delete) mentality even when dealing with objects. Take a class following the Repository pattern:

    public class Repository
    {
        public static void Update(DomainObject obj)
        {
            // Find the information to update
            // Update the values
        }

        public static void Save(DomainObject obj)
        {
            // Add a new item to the repository
            // Update the values
        }
    }

    So fairly straightforward. If I have a DomainObject that I have to make changes to, I call Repository.Update(). If I want to add a new item to the repository, I call Repository.Save(), but that means that somewhere outside my Repository I'm determining if I need to update or save. Isn't it the same thing? A variation on the Repository could be calling the Save method Add instead, but the end result is the same. I don't believe that any object outside of the Repository should have to determine whether it's a new item or an update to an existing one. Maybe what I really want to do is this:

    public class Repository
    {
        public static void Update(DomainObject obj)
        {
            // Determine if this is new or not
            // If new, add a new item otherwise
            // find the existing item and update the values
        }

        private static void Add(DomainObject obj)
        {
            // Add a new item to the repository
        }
    }

     

    So now I'm only exposing the Update method to any callers of the Repository and letting the Repository decide whether this is a new item or not. The Add is hidden, and maybe I have a private method to do a find based on an identity in the DomainObject. In some cases (like if my Repository is a facade to a DBMS) I really just need a new id back from the add, then I can do a Save using that information, which would transmogrify my Repository into something like this:

     

    public class Repository
    {
        public static void Update(DomainObject obj)
        {
            // Determine if this is new or not
            if(!FindById(obj.Id))
            {
                obj.Id = Add(obj);
            }

            // Now update the item using a Mapper
            Mapper.Save(obj);
        }

        private static long Add(DomainObject obj)
        {
            // Add a new item to the repository
            // could be in-memory or a write to a DBMS
            return SomeMethodToAddRecordInDBMS(obj);
        }

        private static bool FindById(long id)
        {
            // Do some kind of find of an item with the same identity
            return SomeMethodToFindRecordInDBMS(id);
        }
    }

     

    Does this make sense or does this just fly in the face of a Repository as well as falling into a Transaction script? Would like to see some concrete examples of using a Repository as I've only seen a couple and what we've been doing may or may not be on track.
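    For what it's worth, here's the same decide-inside-the-repository idea sketched as a tiny in-memory repository. It's JavaScript rather than C# just so it stays short, and the Map-backed store, the id counter, and the method names are purely illustrative, not any particular framework's API:

```javascript
// Minimal in-memory repository sketch: callers only ever call update(),
// and the repository decides whether that means an insert or a modify.
class Repository {
    constructor() {
        this.items = new Map(); // id -> domain object
        this.nextId = 1;
    }

    // The single entry point callers use: upsert semantics.
    update(obj) {
        if (obj.id === undefined || !this.items.has(obj.id)) {
            obj.id = this.add(obj); // new item: assign an identity
        }
        this.items.set(obj.id, obj); // save the current values
        return obj;
    }

    // Conceptually private: only update() decides when to add.
    add(obj) {
        const id = this.nextId++;
        this.items.set(id, obj);
        return id;
    }

    findById(id) {
        return this.items.get(id);
    }
}
```

    Calling code never has to ask "is this new?"; it just hands the object to update() and the repository sorts it out, which is the whole point of the argument above.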

  • Document Mover Utility

    Stumbled across a new utility that popped up on my SharePoint radar (Sharedar?). Here's the blurb from the author:

    Documents Mover is a tool that moves files between SharePoint document libraries while keeping the version history and the directory structure of the files.

    Simple enough. I gave it a whirl and it works pretty well. Good error handling and such. It does suffer from a few problems that the author might want to consider enhancing (and these are just my nitpicks):

    • The tool only allows you to move files from one document library to another in the same site/area. Personally I would want something that moved from one site to another.
    • The tool is another one that has to be run on the server. When are we going to be able to write tools that work anywhere? (yes, you could use my SharePoint Wrappers but it has limited functionality)

    Okay, so the second point is more of my rant about how to build tools for SharePoint rather than what the author did. Anyways, nice handy tool if you need to move documents around in a site and retain the history. Check out the tool and download here.

  • MVP for another year!

    My MVP anniversary is coming up (April 12) and like a bad penny, yes, I'm back in the clan (for at least another year).

    Normally Microsoft sends out a congratulations message then a small package with the MVP details, NDAs, etc. Again like last year, my MVP package came before the email telling me that I would be getting the package. No matter. I'm quite happy to be recognized again and will continue to be as witty and annoying as I was last year.

    Here's to another year of SharePoint evangelizing, postings, blogging, rants, raves, tools, and all around good times (although my frustration levels continue to rise so I might just scratch out the SharePoint Portal Server text on my award and change it to WebSphere).

  • Code Structure with VS2005 Using Partial Classes

    Thanks to Trent for suggesting a partial class for implementing the auto-generated part of a class from yesterday. I thought the idea of partial classes was neat, but the only use I could think of was where you had a large class split up into logical components so multiple people could work on it at the same time. Personally I wouldn't let my classes get that big, but you never know on a large system. However, it seems perfectly suited for this type of separation. Okay, so maybe I'm not the first to explore things to do with partial classes when you're bored. Cut me some slack for being a year late to catch up on things and check out:

    Making Sense of Partial Classes
    Create Elegant Code with Anonymous Methods, Iterators, and Partial Classes
    Understanding and Using Partial Classes

    Here's a sample class split up into two files: UserCG.cs and User.cs. One is for interacting with the Class Designer and the code it generates. The other is for hand coding your Domain logic.

    /// <summary>
    /// This part of the class has the code generated fields and properties.
    /// </summary>
    public partial class User
    {
        public User()
        {
        }

        private string name;
        public string Name
        {
            get
            {
                return name;
            }
            set
            {
                name = value;
            }
        }

        private string emailAddress;
        public string EmailAddress
        {
            get
            {
                return emailAddress;
            }
            set
            {
                emailAddress = value;
            }
        }
    }

    /// <summary>
    /// This part of the class has the coding implementation.
    /// </summary>
    public partial class User
    {
        public void DomainMethod()
        {
        }
    }

     

    It's not perfect, as you may want to use the Designer to add a bunch of methods that should be in the Domain logic, but then you could just add those by hand and keep your fields, properties and interface implementations in the UserCG.cs file. Just right click on the class in the Designer, choose Properties, then specify which source file any new members will be added to through the New Member Location property. So now you can keep all the frilly stuff out and put your Domain logic in a single file for your class. Of course some people will recognize this; it takes me back to the old days of the Implementation and Interface sections from our Delphi/Turbo Pascal days. Very slick nonetheless.

  • Changing the way you code, physically?

    I've been doing a tremendous amount of work with VS2005 lately (and getting all our ducks in a row to set things up to embrace it, this isn't just another software upgrade kids) and noticed something funny that I wanted to ask people about.

    Currently we generally follow a standard on how we organize our class code inside. Specifically, the ordering of how things appear in a class, the regions surrounding them, etc. This is our basic layout for a simple class:

    public class User
    {
           #region Fields
           private string name;
           #endregion

           #region Constructors
           public User()
           {
           }
           #endregion

           #region Accessors
           public string Name
           {
                  get { return name; }
                  set { name = value; }
           }
           #endregion

           #region Methods
           #endregion

           #region Private Methods
           #endregion
    }

     

    Generally the meat is in the middle with the public methods. Private methods live down below hidden in a region, fields up top, etc. It basically gives us some organization when jumping around in the code. Granted, these days people use the drop down method list or navigators like ReSharper's.

    When using VS2005 you have the option (it's not forced thank god) to create your classes using the Class Designer. Here's our class created with it, all pretty and nice, in the happy new designer:

    And here's the code generated behind it:

    public class User
    {
           private string name;
           public string Name
           {
                  get { return name; }
                  set { name = value; }
           }

           public User()
           {
           }
    }

     

    Notice something? The Name property was added first, then I decided to add a constructor, so the code follows the order it was entered in the designer. Even after all that (or if you created the code manually), when you, say, add a new string field called emailAddress and use the refactoring tools to encapsulate it, you get something like this:

    private string emailAddress;
    public string EmailAddress
    {
           get
           {
                  return emailAddress;
           }

           set
           {
                  emailAddress = value;
           }
    }

     

    So Microsoft chose to group the private field with the public accessor. Sounds kind of sane. Six of one, half a dozen of the other. Just wondering if this is going to bug anyone when the codebase reads differently (at least from an organization point of view) than what they might currently do. Is it time to start changing our code organization efforts to accommodate this so we don't end up with some code looking one way and some looking the other?