Tuesday, 2 November 2010

Turn off that LightSwitch, and Let's Get Naked!

So, no sooner had I blogged about VS LightSwitch (I mean literally the next morning) than I discovered that there is a whole architectural pattern out there that tackles the very problem I had been considering developing a framework for: Naked Objects. The shame of my arrogance (mixed with some naivety, I concede) in thinking that I might be the first out there striving for, well, not so much a silver bullet as just something better for such architectural commonality.

Richard Pawson coined the term Naked Objects for his vision of exposing domain objects directly in the UI (an object-oriented user interface, or OOUI).  Naked Objects was the result of his Ph.D. thesis, written a few years ago when I was still trying to get to grips with proper OOAD/P and .NET 1.1.

The same curiosity that got me intrigued by VS LightSwitch had me interested in Naked Objects: a sound belief in DDD. The presentation, data, service and any other layers are simply scaffolding for the domain. Get the domain correct and, from a business representation perspective, your app will be correct. As a software engineer that is the most difficult thing to get right, and it is the thing most likely to make or break the success of your project/app.
 
UI designers, of course, argue differently: that a good UI is critical and fundamental to a successful application (it is this stick that Larry Constantine beat the Naked Objects concept with some time back).
I am not entirely unsympathetic to their stance, and I certainly think a good UI helps, but without a sound domain model a fancy UI is worthless.  The same cannot be argued for the reverse (I know, since I've developed a few such apps, a good few of which were deemed a success and are still in operation today).

Interestingly, Pawson has recounted how Trygve Reenskaug himself stated that the Naked Objects pattern was the ultimate end game of his Model-View-Controller pattern.  This certainly gives weight and validity to Naked Objects as an architectural pattern (in the West of Scotland we call this "hawners").

Of course Pawson has gone on to develop his own framework for the Naked Objects pattern, namely Naked Objects and more specifically Naked Objects MVC for .NET, utilising ASP.NET MVC 2.0.  The Naked Objects MVC framework is interesting from a number of points of view:
  • The Naked Objects framework is solely centred on domain objects, as Pawson claims. In other words, no code is needed from you, the developer, for the presentation layer or the data layer, leaving you to concentrate on getting your domain model correct.  If Naked Objects and VS LightSwitch were ever in direct competition, in my opinion Naked Objects would win for this reason alone. Even though the Entity Framework (EF) Code-First CTP was released before the VS LightSwitch beta, VS LightSwitch is still driven from the data model, suggesting that the intended user demographic of VS LightSwitch is predominantly people with no proper OO experience.  Naked Objects, on the other hand, is clearly a product for OO developers.
  • The domain objects are also true POCOs, meaning that you can port your domain model to another framework, or to your own custom framework/application at a later date if you decide the output of Naked Objects is not to your liking, without any rework to your domain (there is a rough sketch of what I mean after this list).
  • The data layer of Naked Objects is built on the Code-First component of EF, which doesn't require any data modelling or data access code; you hook up your domain model and away you go.
  • The presentation layer uses reflection to obtain not just the state but also the behaviour of your domain objects, in order to display the functionality on offer in your UI.  This means that only a few generic views and controllers are actually needed in the framework to render this behaviour and state at run time.
  • The Naked Objects MVC product also allows you to customise your views, meaning the end result doesn't have to be a set of similar-looking views obviously generated from boilerplate. Indeed, you can add your own views to extend and customise your UI further if necessary.  (My guess is that this is Pawson's reply to Constantine's criticisms of generated, standard, no-design UIs.)
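To make the POCO and reflection points above a little more concrete, here is a minimal sketch of my own (a hypothetical Customer class and renderer, not taken from the Naked Objects samples) showing a plain domain object and the kind of reflection a generic view might use to discover its state and behaviour:

```csharp
using System;
using System.Linq;
using System.Reflection;

// A hypothetical, framework-free domain object - nothing but state and behaviour.
public class Customer
{
    public string Name { get; set; }
    public decimal CreditLimit { get; private set; }

    // Domain behaviour lives on the object itself.
    public void IncreaseCreditLimit(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("Amount must be positive.", "amount");
        CreditLimit += amount;
    }
}

public static class GenericRenderer
{
    // Roughly the trick a generic view/controller pair can pull at runtime:
    // enumerate properties (state) and public methods (behaviour) via reflection.
    public static void Describe(object domainObject)
    {
        Type type = domainObject.GetType();

        foreach (PropertyInfo property in type.GetProperties())
            Console.WriteLine("Field: {0} = {1}", property.Name, property.GetValue(domainObject, null));

        var actions = type.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                          .Where(m => !m.IsSpecialName); // skip property getters/setters
        foreach (MethodInfo action in actions)
            Console.WriteLine("Action: {0}", action.Name);
    }
}
```

Run against a Customer instance, Describe would list Name and CreditLimit as state and IncreaseCreditLimit as an action - which is essentially what lets a handful of generic views and controllers serve every domain class.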
However, there are a couple of concerns I have about Naked Objects MVC at first glance:
  • The UI uses terminology such as "Create Instance", which is great for technical people, but it means business users need to learn a new set of terminology to use your application.  To me, this seems to contradict the Ubiquitous Language so keenly advocated by DDD.
  • The use of reflection on all domain objects in order to build dynamic pages at runtime does concern me a little with regard to performance. Depending on the complexity of your objects and/or intended UI page, there is a possibility that you may end up with a performance problem in your web application. (I am willing to be set straight on this one if I have it wrong.)
  • The data layer is built on a CTP of EF.  Admittedly, CTP4 does seem to have done a lot for EF's reputation as an ORM, but I know my boss isn't interested in running production applications against CTPs any time soon.
  • Pawson also claims that you have flexibility in your application because you can customise your views not just with stylesheets but with a raft of very helpful HTML helper classes that are part of the Naked Objects framework.  He also makes a case for ultimate flexibility, as the framework allows you to use your own custom views and, in all likelihood, your own controllers too.  Personally, I'm not convinced by this as a selling point of Naked Objects MVC, because once you have cranked out your own views and controllers, for all intents and purposes you are left with the domain model you also created and the EF CTP.
As a final observation, Pawson claims that Naked Objects works well for sovereign applications but does not lend itself very well to transient applications.  Personally, I don't think application posture is as applicable today as it once was; today's end users don't just use software as a job tool, but as a medium that is part of their day-to-day social interaction. As a result, their expectations are higher and, subconsciously or otherwise, the expectation (no matter whether they use it 7 hours per day or just in passing) is not just about how it looks or what functionality it offers, but how it actually behaves (usability).  If the UI generated by the framework is rigid in structure and not intuitive to users, this could limit the adoption of Naked Objects frameworks such as Naked Objects MVC.

My previous observation about having a sound domain model without necessarily having a good UI should be qualified by the times in which that software was written. Going back to Constantine's point about the UI being vital: end users these days are not expecting a good UI as opposed to a functionally sound domain implementation, or even vice versa - they want both.

Saturday, 30 October 2010

Enlightened? Certainly Not At The Flick Of A Switch.

Recently I watched the VSLive presentation on Microsoft's new development technology - Visual Studio LightSwitch - and I was more than a little intrigued (and admittedly a little nervous) about what was in store for me.

Before getting down to technical detail I wanted to mention Microsoft sessions in general.  As I expected from Microsoft, the presentation and the demos were slick and polished, but I can't help feeling I'm being presented to by a couple of marketing guys who have been schooled by techies. I far prefer being presented to by an academic to being sold to by a salesman.  Maybe it's just me?


OK, so on to LightSwitch.  The main questions I had were: What is it?  And what problem is it trying to solve?

As I say, I was intrigued by LightSwitch because I had read some material explaining that LightSwitch was the solution for straightforward CRUD applications (or the CRUD elements of applications at least) - that LightSwitch would auto-generate CRUD code.  I have to admit this initially had me excited.
You see, most of the bespoke applications I'm involved in tend to consist of 2 functional aspects:
  1. the implementation of the specific business domain (i.e. the part of the application that supports the specific business process), and
  2. the maintenance of the supporting data for that business process - the CRUD element.
Designing and implementing the business domain is the interesting (and dare I say it, creative) part of the development process.  The CRUD element tends to be the necessary evil for many applications, as it tends to be unimaginative, unchallenging and repetitive. As a developer, anything repetitive I try to automate or design my way round (I usually end up letting inheritance and the O-R mapper take the strain, along the lines sketched below).
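For what it's worth, this is the kind of thing I mean by letting inheritance and the O-R mapper take the strain - a sketch only, with a hypothetical Entity base class and repository interface rather than any particular ORM's API:

```csharp
using System.Collections.Generic;

// Hypothetical common base class for persistent domain objects.
public abstract class Entity
{
    public int Id { get; protected set; }
}

// The repetitive CRUD plumbing declared once and reused for every entity type.
public interface IRepository<T> where T : Entity
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Remove(T entity);
}

// A single concrete implementation (backed by whichever O-R mapper is in play)
// then gives Customer, Order, etc. their CRUD for free, e.g.:
// IRepository<Customer> customers = new OrmRepository<Customer>(session);
```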

Now here was Microsoft apparently releasing a tool to automate this for me.  Sounds good.  However, I still had reservations, namely:
  1. Is it easy to integrate the specific, interesting (and sometimes complex) business domain code of your application with the CRUD code auto-generated by LightSwitch?
  2. Integrating and re-using objects is one thing, but can such drag & drop solutions actually scale?  And what type of architecture am I going to end up with?
Taking these one at a time: Microsoft claim that you can indeed integrate your custom, complex business code easily by generating your CRUD code and then simply digging into the C# or VB code generated for you.  Again, sounds good, doesn't it? Well, on reflection I'm not so sure, as it means you face the same problems that many other code generators create (T4 templates aside, perhaps, but that's another blog post already in the pipeline): you have no control over what that code looks like, meaning your objects may end up in a shape you don't want and that doesn't allow you to extend them easily.  Then there's the issue of continual refactoring.  What happens with LightSwitch when you generate your CRUD code, complete with your nice new screens, extend the code by adding some custom business logic, and then discover that you need to regenerate the whole lot again via LightSwitch?  What happens to that expertly written custom business logic you are so proud of?  Do you have to re-insert it?  Will it still fit?
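One mitigation that .NET code generators commonly lean on (and which I would hope LightSwitch follows, though I haven't verified it) is partial classes, which at least keep your hand-written logic in a file the generator never touches. A rough illustration with a hypothetical Customer class:

```csharp
// Customer.Generated.cs - owned by the code generator; regenerated at will.
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

// Customer.cs - hand-written; never overwritten by regeneration.
public partial class Customer
{
    // Custom business logic sits alongside the generated state,
    // but survives a full regeneration of the generated file.
    public bool IsCreditWorthy(decimal requestedAmount)
    {
        return Balance + requestedAmount <= 10000m;
    }
}
```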

Moving on to the non-functional questions over flexibility and scalability: again, Microsoft claim that LightSwitch allows you to deploy your end solution as a Silverlight app, which definitely gives it reasonable flexibility over the target platform.  The demo also covers how to scale this from being a simple single-user executable during development to being a multi-user deployed web app.  Again, the demo makes it look so simple, but I think that is mostly down to the simplicity of the demo. In reality, most applications tend to be more complex than that.

All of this aside, the one question that remains unanswered is: who exactly is this product aimed at? Who is going to use it?

The LightSwitch demo host presented a strong case for allowing business users to build their own apps organically, and only when an app graduates from being a single-user or department-wide app to an enterprise app should we developers get involved.  Traditionally, this has been done using Microsoft Access in its various incarnations. I've experienced such situations, and I can understand the argument that says the best applications "grow" from this type of root.
So, is this an Access replacement?  Well, no, because it's a Visual Studio component and business users don't usually have Visual Studio on their standard desktop. They do have Office, which is why Access had/has such a high adoption rate.
So then it's a developer tool. And here is where that nervous feeling comes from... are Microsoft suggesting .NET developers hand over control of the design and architecture of their enterprise-scale software to a code generator?  (Not only that, but a code generator that seems to focus on state and not necessarily behaviour?!)  Just when we are finally making some headway with .NET being taken seriously as a development platform - some innovation, some serious thought leaders using the .NET platform and out there being heard - Microsoft comes back advertising a new way of pointing and clicking your way through your professional career.  I wonder if it's too late for me to start learning something like Ruby on Rails?

(Thanks to restoreus.org for use of the image.)

Tuesday, 7 September 2010

Don't Mock Me, I'm a Classist

I have to admit to being an immediate convert to NUnit as soon as I was introduced to it around 5-6 years ago. So much so that not only could I not conceive of ever developing without it, but (rather arrogantly) I couldn't understand why everyone wasn't fully test-infected, just like me.  Forget the fact that it had been around the Java world for some time already and I just hadn't had the presence of mind to take note - no, no, no. This was it. This was my Ground Zero, and why wasn't everyone else in on it? Heathens!
I got off my high horse some time later and thankfully, over the past few years, I have noticed that in the environments I work in, automated unit and/or integration testing has slowly but surely become more popular.  All the teams I work with just know that it is a given; automated testing is not up for discussion, it's just there.  It's like a joiner turning up for work without a hammer - it doesn't happen. It's such an inherent part of implementation that when I ask developers for implementation estimates, those estimates should (and will!) include time for unit testing.  I don't think it's fully ingrained to that extent everywhere just yet, but at least it will be in the good dev shops.

With such growth in popularity come not only new ideas, but new opinions on how things should be done. When it comes to testing frameworks, the new ideas and opinions usually centre on using a mocking framework (such as RhinoMocks, NMock or the more recent Moq) rather than just plain NUnit.
I have tried mocking frameworks on multiple projects, and personally I am yet to be fully convinced.  There are two separate elements of functionality offered by most mocking frameworks:
  1. Using a mocking framework to generate stubs.
  2. Using a mocking framework for mocking.
Let me make this clear: using a mocking framework for stub class generation, I love.  I'm sold.  It saves me time and effort, and makes my tests simpler and more lightweight. But using a mocking framework to generate stubs is not mock testing.  Fowler's paper Mocks Aren't Stubs explains why so much better than I could ever hope to.  However, providing the additional stubbing functionality has ironically muddied the waters for a lot of people on what mocking actually is.  And that is a shame, because I'm still looking for someone I know to successfully mock their production code, providing adequate code coverage and flexible test harnesses, such that I would be convinced to give it another go.
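To illustrate the distinction with a minimal sketch of my own (using Moq and NUnit, with hypothetical IPriceCatalogue, IEmailGateway and OrderService types): a stub merely feeds canned data into a state-based test, whereas a mock turns the test into an assertion about the interaction itself.

```csharp
using Moq;
using NUnit.Framework;

public interface IEmailGateway { void Send(string to, string subject); }
public interface IPriceCatalogue { decimal PriceOf(string sku); }

public class OrderService
{
    private readonly IPriceCatalogue _catalogue;
    private readonly IEmailGateway _email;

    public OrderService(IPriceCatalogue catalogue, IEmailGateway email)
    {
        _catalogue = catalogue;
        _email = email;
    }

    public decimal PlaceOrder(string customerEmail, string sku, int quantity)
    {
        decimal total = _catalogue.PriceOf(sku) * quantity;
        _email.Send(customerEmail, "Order confirmation");
        return total;
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Stub_style_state_based_test()
    {
        // The catalogue is a stub: it only supplies canned data.
        var catalogue = new Mock<IPriceCatalogue>();
        catalogue.Setup(c => c.PriceOf("WIDGET")).Returns(2.50m);
        var service = new OrderService(catalogue.Object, new Mock<IEmailGateway>().Object);

        decimal total = service.PlaceOrder("a@b.com", "WIDGET", 4);

        Assert.AreEqual(10.00m, total); // assert on state/output only
    }

    [Test]
    public void Mock_style_interaction_based_test()
    {
        // The email gateway is a mock: the assertion is about the interaction.
        var email = new Mock<IEmailGateway>();
        var catalogue = new Mock<IPriceCatalogue>();
        catalogue.Setup(c => c.PriceOf("WIDGET")).Returns(2.50m);
        var service = new OrderService(catalogue.Object, email.Object);

        service.PlaceOrder("a@b.com", "WIDGET", 4);

        email.Verify(e => e.Send("a@b.com", "Order confirmation"), Times.Once());
    }
}
```

It is tests of the second kind that I find brittle: rename the subject line or reorder the calls inside PlaceOrder and the test fails, even though the observable behaviour hasn't changed.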

Onto the mocking itself.  Flexible test harnesses in particular are where I have trouble with mock testing.
Classic unit testing is easy to understand and adopt, with the test-infected mantra of Red-Green-Refactor.  The way I see it, the refactoring step becomes more difficult when you have tests mocking the internal behaviour of your classes; to me your tests are inherently brittle when you are double-checking the inner workings of your code.  Now, guys like Ayende have forgotten more than I will ever know, so I don't doubt their concepts but to this day, every time I sit down and try to test my code by mocking, I can only see brittle tests in front of me - and that is why, as Fowler says, I'm a classist rather than a mockist.

On a lighter note: on first reading Fowler's Mocks Aren't Stubs many moons ago, I thought that "traditionalist" would have been a better term than "classic" or "classist" to describe the original automated testing practice.  Second thoughts led me to think otherwise.  Modern classics will always be there, and will always be used, because... well, they're classics, aren't they? I will always listen to The Stone Roses' first album, and Teenage Fanclub's Bandwagonesque.  Not because they are traditional, but because they are just good (no matter how much some kid in skinny jeans tries to tell me otherwise). It's the same with classic unit testing.  Sure, the tools may become better and fancier, but the basic concept is sound and will always be practised.
I'm not sure whether Fowler gave much thought to his terminology, but in this profession, where it is difficult to convey conceptual ideas well, naming something correctly is vital. Whether he meant it or not, he is on the money (and not for the first time either).

Sunday, 20 June 2010

Technical Reveal All: Automate Database Build With Team Foundation Build (part 1)

My previous post was mostly a rant about the vagaries of working with VS 2010 (Ultimate) - TFS and Team Foundation Build specifically.  Just to recap: I was struggling to get my automated build with Team Build to incorporate the necessary database build and deploy (including test data insertion).  I could manually build and deploy the database, and insert test data by running the Test Generator to kick off the pre-defined Integration Test Plan.  I could even tie all this up locally to automate these steps when running a test build (via the TestSettings file).  However, the difficulties came when trying to achieve the same thing on the build server.

As I confessed during my complaints-based post, I suspected it was as much about my lack of experience as it was about the capabilities of the Studio build tools.  I readily admit that this may well still be the case and my solution isn't the most elegant. Well, my build works just fine no matter how inelegant it is, and I won't have the time or inclination to improve it until it either breaks or I attack this whole Application Lifecycle Management (ALM) issue on another project.

However, what alarmed me more than anything was the lack of readily available information on this particular topic when I googled.  Therefore, this post is designed to offer my experience and my way of achieving what I consider a basic and essential automated build feature in .NET.

Getting Started: Database Projects (Part 1)
This is the first time I have used Studio database projects. The majority of the projects I work on are object-centric (that is, the database is simply for persisting data and nothing more - no complex stored procedures and certainly no database triggers to be seen on any of my projects if I can help it).  As a result, up until now my database code has been deliberately sparse: a database creation script, table creation scripts, and test data scripts for unit/integration tests of the Data Mapper, as well as a batch file for effectively bootstrapping the database build as part of the automated build. I'm not a database guy, so the simpler and less there is, the less that can go wrong and confuse me.  My dad always used this when justifying his car purchases: the fewer working parts or gadgets, the less there is to fix.  (The old man had a point; I just drew the line at the Seat Marbella he had for a while.)
However, I thought I'd give the Studio database project a go to get into the spirit of the whole MS toolset.  I have to admit that what I've found so far has been pleasing and easy to deal with. I can extract the scripts directly from a working database to include as part of my deployment to test or live.  What I actually end up with doesn't amount to much more than what I get by writing my scripts directly, but it's easier getting there and I don't have to debug them to ensure that they work - they just do (I include test data generation in this also).

So, first of all, create a new database project as part of the solution.  Secondly, go to the Schema Explorer and create your tables in much the same way you would in SQL Server using the Management Studio user interface.  Then build and deploy your database from the Build menu in Studio.  What this does, for the uninitiated like me, is the following:
  1. The build creates your database scripts for you based on what you have produced in the Schema Explorer
  2. The deploy runs the database scripts just created in the build step
  3. When you build the solution, the database build is run like any other project, and when you run your solution, the build and deploy are both run (not as build pre-requisite tasks, but as an integral part of the solution)
OK, so this is all good, but now you need to harness the true power of database projects to make their adoption worthwhile - data generation plans.  When you look at the database project within the Solution Explorer in Studio you will see a Data Generation Plans folder (which will contain *.dgen files).  You can have multiple data generation plans (i.e. different configurations) for different purposes (unit testing, integration testing, system testing, etc.).  The Data Generation Plan window is split into 3 panes:
  • Table View
  • Column View
  • Data Generation Preview
To build your data generation plan you essentially select each table from your schema in turn via the Table View and then configure the data to be automatically generated for each column of the table using the Column View.  You can manipulate individual columns by editing the column properties.  This is very flexible: the data generation properties of string columns allow the use of regular expressions to generate meaningful data, or even selection from a number of alternatives (achieved by use of pipes, e.g. "string1 | string2 | string3"), and the data generation properties of integer columns allow upper and lower bounds for numeric values.
You can then preview the data that will be generated using (you guessed it) the Data Generation Preview pane, and then generate the data (insert it into the database tables) via the Data | Data Generator | Generate Data menu.  After selecting your database connection of choice, your database is now populated with the test data. What's more, it is saved to a .dgen file for re-use.
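Conceptually, what the string and integer generators are doing is not far off the following sketch of my own (an illustration of the idea, not what the tooling actually executes):

```csharp
using System;

public static class TestDataSketch
{
    private static readonly Random Random = new Random();

    // "string1 | string2 | string3" style generation: pick one of the options.
    public static string OneOf(string pipeSeparatedOptions)
    {
        string[] options = pipeSeparatedOptions.Split('|');
        return options[Random.Next(options.Length)].Trim();
    }

    // Integer column generation with configured lower/upper bounds (inclusive).
    public static int Between(int lowerBound, int upperBound)
    {
        return Random.Next(lowerBound, upperBound + 1);
    }
}

// e.g. OneOf("Mr | Mrs | Dr") for a Title column and Between(18, 65) for an Age column.
```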
To automate this is reasonably simple once you know how, but still not as intuitive as I would like. 

OK, so automated data generation using Data Generation Plans is useful for populating a database. This data doesn't necessarily have to be for testing purposes, I guess, but more than likely it will be.  However, so as not to tie the data generation to test data only, the data generation plan (part of a database project) can be hooked into a test project via a context-sensitive menu for Test projects: Test | Test Database Configuration.  From here you can configure the database scripts to deploy the database every time a test build executes, as well as configuring a data generation plan to run to insert test data after the database has been deployed.

And there we have it: a way of getting the database build and deploy (including test data) as part of the full automated build.  Well, on the client at least... Part 2 looks at automating this on the server.

It should be noted that this is not something exclusive to Studio 2010. As far as I'm aware, database projects and Data Generation Plans have been around since Visual Studio 2008, albeit in the Database Edition.  Differentiating between object code and database code in separate developer editions was not a smart move by Microsoft in 2008.  It may have meant additional licence sales, but it didn't match the make-up of an average commercial .NET development team - application developers need to write database scripts sometimes too!

(Thanks to ChildOfThe1980s.com for use of the Johnny Ball image.)






Saturday, 15 May 2010

Easy Like Thursday Morning

So it was Thursday morning. I was tired and looking forward to the weekend (weekends are Fri/Sat here).  But I knew that if I didn't get this automated build going before the weekend arrived I just wouldn't be happy. All weekend it would be at the back of my mind (not necessarily because I'm that work-conscious or obsessed; I'm just built that way, maybe a little OCD or something, I don't know).  So I eventually got things up and running: compilation, database build, test data set up and all tests passing, green light - woo-hoo!  I'm proud of myself, got this licked before I clock off for the weekend - ain't it rewarding sometimes being a developer?  And it only took me about 4 days to do, and the nightly and weekly builds shouldn't be much more than... hold up... let's go back a wee bit here... four days?  FOUR. DAYS?!?

Yep, you read that right.  I was patting myself on the back after taking four days to get my automated build going.  Something that normally takes me a couple of hours, possibly a day at most if the build is convoluted and I'm having a bad day. And I'm self-congratulating after a P.B. four-dayer?  And then it hit me like a dull blow to the back of the head. What the heck have I got to celebrate about?  You see, I was too busy basking in the pleasure of achievement after so much pain of failure to immediately realise how ridiculous this was.

Let me explain...

Recently I started my first proper working project in Studio 2010.  This is good for me, I'm usually at least a year behind the release year of Microsoft development products by the time I adopt/migrate. 

Warning: this may be where the sun stops shining in this post.

To my dismay I'm using the Team System suite of tools (due to technology policy restrictions in my working environment) which means TFS, TF Build, SharePoint, etc.  I want to make it clear before I go any further that I'm not a Microsoft basher - goodness knows they don't need another to add to the pile and besides, I've been working with the MS toolset all of my professional life - I just normally prefer to use other tools to help me with my .NET development.

(For the record, when I refer to TFS here I am talking about source/version control. I'm not sure if this is correct or if TFS is the full bhuna server product. That confuses me also, and life is simply too short for me to trawl through the MS marketing guff just to clarify.) I've used TFS on a client site before, when the client insisted on it.  I swore never again. Of course that was unrealistic working on service contracts, but it did hurt at the time. A lot. Fowler's post on version control tools sums up nicely where TFS sits in the grand scheme of things, and I couldn't agree more.  Anyway, here I was again, but I resolved to make the best of it.  It was the 2010 TFS - perhaps the newest version had improved? Bearing in mind I'm only 2 weeks into using TFS this time, here are the highlights of my experience so far; judge for yourself:
  • We had compatibility problems with TFS 2008 and TFS Team Build 2010, so we had to install TFS 2010 onto a new server before getting started proper;
  • There is a machine name length limitation on Server 2008 that gives you problems with TFS.  Sure, there is a warning when you apply your new machine name, but when you try to configure TFS it simply doesn't allow you to point the TFS database to another instance of SQL Server.  Not until the machine has been re-christened, at least (and not a hint of this being the reason in sight). There shouldn't be a warning, just a hard limit on machine name length, and this could all be avoided;
  • Synchronisation.  This is what gave me nightmares from my previous experience. Synchronising between your local workspace and the repository confuses the hell out of me with TFS.  Am I alone?  Well, other team members were also having the same problems immediately.  On a colleague's machine, on first check-in, some files were deleted from the local workspace before commit.  That's some files, by the way.  Not all, only some.  I'm sure there was something common about why these files were deleted and not others, but I couldn't figure it out, and the developer in question was too busy crying into his hands to care;
  • TFS has the concept of site collections - source control paired with SharePoint to give a "single collaborative working environment" (or something to that effect). Each project therefore has a collection.  You can assign users to individual collections or as administrators to the whole site.  If you want to assign a user to a collection, it seems you can only do this via Integrated Windows Authentication. A simple username and password would do for source control, surely? Or even the option to choose between the two. It's not a biggie, but it doesn't encourage flexibility either;
  • And on to my favourite.  I spent half an hour on this one alone trying to figure out what was going on so that I wouldn't repeat the mistake again. I didn't figure it out, and I probably will repeat it. It's little ditties like this one that contribute to four days of suffering.  Another synchronisation issue (at least I think it was).  I had a folder in my solution that I couldn't remove.  Not a solution folder, but a folder that is normally created on the local filesystem by Studio.  I couldn't remove or delete it as it was being held onto by another process.  Which was strange, because there wasn't a local copy on the filesystem and there wasn't a copy in the repository.  So never mind why I can't remove it or what process is holding onto it - why is it actually still there in the first place? (By the way, can anyone explain to me what the difference is to Studio between remove and delete?  And then can anyone tell me why I should care?)
I'm sure the synchronisation issues are down to naivety, a simple lack of experience with the tool, and will dissipate gradually with daily usage, but should it be this hard?  Subversion is a great example of how less is more with source control.  TFS seems to me to be a great example of just the opposite.

Onto TF Team Build. 
Getting the architectural proof-of-concept up and running was relatively straightforward (I like the RUP terminology for this one, although a friend and ex-colleague has a nice analogy and term for it). We are using ASP.NET MVC 2.0 for the first time and my experiences of it so far are very positive (I hope to blog about this soon).  So even with ASP.NET MVC, getting the build up and running with unit and integration tests passing locally using MSBuild was easy.  Then on to making this happen on the server...

It seems compiling and executing unit tests on the integration server using TF Team Build is straightforward. Building your database, however, is not.  I originally had a simple SQLCMD batch file which kicked off my database build, and another for inserting my test data for data layer integration tests.  These batch files were reliant on relative paths and on being copied to and executed from a particular folder.  On the server, this was difficult to achieve.  It seems that Team Build likes moving and organising binaries into different folders for the purposes of build agents and controllers, and binaries seem to be treated differently from supporting files and folders.  No, I don't know why either. After what seemed like an eternity wrestling with the fact that the build on the server does not behave exactly like it does on your local development PC, I eventually succumbed to Visual Studio's way of doing things with database development - database projects.

This is my first experience of database projects, so I'm not 100% sure about them yet, but they did allow me to compile my database project like all the other projects in my solution.  I initially thought this was good, but then I realised that compiling my database project didn't mean building my database as I interpreted it.  It meant "compiling" (deriving) my database scripts from the local copy of my database.  I still had to deploy the database.  Fair enough - my incorrect interpretation of the terminology.  Again, this was OK locally, as deployment is a menu option on the Build menu for database projects, and as a project I can configure it to deploy after compiling when compiling and executing the whole solution.  But this is locally.  On the server, the behaviour is different... again. So I was essentially back to square one, because my server build was different from my local build.  I eventually resolved this after initially banging my soft head against a hard wall and then figuring out about XAML build templates with Workflow Foundation for executing server builds. That's all great if you have XAML and/or WF experience, but if you are like me, it's a pain I don't feel I should have to endure just to get something as fundamental as my automated build to behave on the server the same way it does on my development machine (cue more of head and wall meeting at substantial velocity).

By this point I stupidly thought I was close to getting a fully working build, only to discover that something different and additional has to happen to populate my database with test data after deploying it.  Again, locally this is configured in the same way within Visual Studio under Database Test Configuration (it's even on the same form!), but on the server build this is a separate special case and has to be configured as such.  The end result is that I improved my understanding of XAML and build process templates, if only a little and at considerable pain, but I did get a populated database for my automated build.

This brings me up to my false moment of elation with my fully automated build for continuous integration using VSTS on Thursday morning.  That took me a painful four days (did I mention that already?).

Earlier I referred to preferring other tools. I prefer them not simply because they are not Microsoft-based, but for a number of valid reasons:
  1. They are unintrusive - once my automated build is up and running, I only use CruiseControl (my CI tool of choice) for feedback. That's the way it should be.
  2. They are cheaper - the majority of the other tools I prefer happen to be open source.  You can't get more cost-effective than free.
  3. They just work - I don't normally burn a week configuring and fighting a CI build.  It usually takes about 2 hours.  Again, that's the way it should be.  My time is better spent solving complex design issues, not scaffolding problems like these.  Another way of looking at it is this: try using this as the justifiable reason next time you are late with a delivery and see how much your customer cares about the complexities of your build tool.
I have 3 issues with the Microsoft toolset that get me really perplexed:
  1. The first is that this could so easily be avoided.  The other tools I have referred to and use don't make things this difficult, so why should Microsoft?
  2. Do Microsoft use these aforementioned tools to support their own development?  The frequency of Silverlight releases makes me think otherwise (for the Silverlight team at least).  If they do, why aren't we furnished with great examples of complex and convoluted scripts and recommended configurations?  Convention over configuration seems to be the way to go.  Don't Microsoft agree here?
  3. Thirdly (and this is fundamental), what are Microsoft saying to their developer community?  It can't be that they don't agree with good practices such as Continuous Integration, because recently I've come across some of their blurb about TDD and CI.  Making the compilation and deployment of your database part of the full CI build is essential, not optional or a pipe dream.  Why then is it so convoluted (and/or secretive) to make it happen?
As I said before, I'm not anti-Microsoft (as much as this post indicates to the contrary) and maybe my limited understanding of these tools means I'm getting things wrong or misconstrued. If I am, I welcome being informed of better ways to do this. Maybe I just never found them in my rush to get something working inside 4 days ;-)

(It's not me in the picture; personally I suspect it's Tom Green. Thanks to FotoSearch for use of the image.)




Tuesday, 27 April 2010

ID the Identity

Since I started working in my new place I've come across the same sort of design issues that I encountered in my previous company.  This is a post originally from my previous blog.  I wanted to highlight this issue again since it has reappeared in my consciousness, but I also wanted to take the opportunity to update it with new learnings, so I moved it here.

One such common design issue is the matter of object identity and how it is used within the application under development.  What I have seen all too often is the following:

The Primary Key reference from the database record is the object identifier!


So, hands up who recognises the statement above because they have done it in the past, are currently supporting a legacy app with this nuance, or are still doing it?   And, as everyone heard all too often as a teenager, "everyone's doing it, what's the problem?"

Well, for a start, I've tried not to use the term "problem" up until now because, for one, I'm just not the dramatic Graham Norton type, and two, I'm sure that if you are doing this on your current development project it probably won't have manifested itself as a problem.  Maybe not yet, at any rate.
But consider these..

By exposing your primary keys as application IDs, you are setting that identifier in stone.  Of course your identifier should be immutable, but this cannot always be guaranteed with the database sequences that are used for primary keys.  And for many successful (and therefore large) systems, the database has to be scaled out into server farms, so GUIDs are recommended to guarantee uniqueness across multiple database servers.  Who fancies having to remember a GUID for an object identifier?

The reason above looks at this issue from a purely technical viewpoint.  The rest of the reasons below consider the functional aspects of such an implementation and (in my book) are all the more convincing for it.

Consider this: database primary keys are quite simply a database storage mechanism.  They are a handy way to uniquely identify individual records in relational databases.  Relational databases in enterprise applications are for persistence and nothing more.  Why let your persistence mechanism bleed into your functionality?  The primary key value is not application data and therefore should not feature in the application.  So, on all projects I have a say in, I now vehemently insist on making the primary key ID a private or protected member.  Anyone who hasn't previously worked on a project with me usually looks confused or thinks I'm making a big deal out of nothing at this stage.
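As a minimal sketch of what I mean (a hypothetical Employee class, with a payroll number standing in for whatever identity the domain actually provides):

```csharp
public class Employee
{
    // Database primary key: persistence plumbing only, so it stays hidden
    // from the rest of the application.
    protected int DatabaseId { get; set; }

    // The identity that is significant in the domain - this is what the
    // application and its users refer to.
    public string PayrollNumber { get; private set; }

    public string Name { get; set; }

    public Employee(string payrollNumber, string name)
    {
        PayrollNumber = payrollNumber;
        Name = name;
    }

    // Equality follows the domain identity, not the surrogate key.
    public override bool Equals(object obj)
    {
        var other = obj as Employee;
        return other != null && other.PayrollNumber == PayrollNumber;
    }

    public override int GetHashCode()
    {
        return PayrollNumber == null ? 0 : PayrollNumber.GetHashCode();
    }
}
```

Most O-R mappers can still be pointed at the protected DatabaseId for persistence, while nothing else in the application ever sees it.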

Still not convinced?  OK, what about this: by exposing the primary key of the record you are unnecessarily exposing your application implementation to your users.  This may not be a big deal to all developers in all situations, but for customers who want their applications to be as secure as possible, can you truthfully say that you have mitigated every possible security risk when you are advertising database IDs?

Not only have you unnecessarily introduced a potential security risk, you may be unintentionally misleading your customers.  Listening to an old DotNetRocks podcast, another reason why this is not a good idea was highlighted - your customers can be misled as to the meaning of the identifier.  The example given in the podcast (thanks Richard & Carl) was a customer who was upset that their own customers were identified in their system using the database primary key.  When their biggest customer turned out to be ID 372 it was a disaster, and they insisted it had to be changed.  No matter how arbitrary that ID is to a developer, perception is everything in business.

And finally for the reason that has me so cantankerous on this subject - in Eric Evans' DDD book he explains
"it is common for identity to be significant outside a particular software system"
What Evans encourages with Domain-Driven Design is to let the "domain drive your design".  Nothing insightful there, and it sounds quite simple, right? So why isn't this enforced in so many software designs? What Evans doesn't push home hard enough is that when you do have a significant identity outside the software system, that significant identity should be used as the object identifier.  By designing your object model to incorporate such identities you are not only aligning your implementation more closely with the business domain, but you are gaining and sharing a better understanding of the business domain with anyone else using the object model (however subtle that may be).  Simply taking the easy way out by using a database primary key is not only encouraging a lazy solution, but discouraging your development team from realising a better and more accurate design.

Of course, Evans goes on to explain that although this is indeed common, identities are sometimes only important in the system context (exceptions to every rule, of course).  That is OK and should be expected too. The main point I am trying to emphasise is that you should always look to the domain to identify an identity for your object.  When there isn't a logical one in the real world, then of course you should have a unique identifier mechanism. Absolutely - but please just do not make it the database primary key!

Monday, 19 April 2010

Reboot


After many false starts, I've decided to start posting again. I moved abroad to work around a year ago, and I'm only just at the point where the job and life have settled down enough for me to start contributing again.

Working for the same company for over a decade has its advantages and disadvantages. However, for me most of the advantages came back to one thing - comfort. That's OK, but I did want to try something else before I ended up too comfortable. So, rather than taking the small step to try another company nearby I took a giant leap with the offer of a job on the other side of the world. Nothing iterative or incremental here.

The national cultural differences are obvious and immediate for anyone to see - some better, some not so. I was looking forward to a change and I certainly got it in these respects. However, the company culture is drastically different too - well, this is where the comfort factor I had up until now mostly came from, after all.

Obviously, some of a company's culture is shaped by the country's culture and attitudes. Of course it is, but not all of it. After working for a software services company for so long, I am now working on a large managed services contract on behalf of my new employers. Still service offerings, right? ("Same, same", as I've heard often out here). Well, yes and no. The difference here is that this managed service contract has effectively replaced what I know as a "bodyshopping" contract.

And this is where the massive divergence in culture exists. Believe me, the difference between what we are trying to do and the inherited culture of this team is enormous. It should be pointed out that we have, and will have, a lot to learn in return - it's certainly not all one-way. I hope to interlace some of my posts with highlights of some of these differences, and what we are trying to do to change things for the better, as I go.

But hey, I wanted out of that comfort zone right?