Tuesday, May 5, 2009

Rant – It’s the little charges that show a company’s true character

I use a certain well-known VoIP service that starts with a V and ends with AGE.  You know who you are.  Apparently, they’re more than willing to let you move up to a more expensive plan at any time.  In my case, I temporarily moved to a plan that included unlimited worldwide calling.  The plan was active immediately, and there was no charge to change plans.

When I moved back down to the normal domestic-only plan, however, I was told the plan wouldn’t take effect until the start of my next billing cycle (hehe.. I accidentally had typed “bilking cycle” there), and there was a $9.99 charge (plus tax) for an “activation fee”.

Honestly, I would probably not have been so bothered by this if they had also charged me to move to a more expensive plan.  But this just reeks of greed.  It’s little charges like this that show you a company’s true character.  Don’t ever think a company like this is in it for the customers.

Tuesday, April 7, 2009

CMS Project Update – DDD Analysis Paralysis

I’m in the middle of my re-write of the CMS project I blogged about earlier.  Here’s a quick update:

  • I’ve switched from building my own DAL to using nHibernate, a move I’m still coming to terms with, but generally happy with.
  • I’m using the Repository Pattern for data persistence
  • I’m using ASP.NET MVC for the web application layer
  • I’m writing a LOT of unit tests to flesh out the object model design and basic functionality before I even start on the web UI.
  • I’m attempting to combine my DTO layer with my rich object model, which is the focus of this post.

Architecture Overview

I’ve created an abstract EntityBase class that simply has an ID.  So far, two kinds of objects derive from this – content classes (e.g. blog posts, articles, etc.) and non-content objects that still have identities and still need to be persisted (e.g. tags).  In the database, I have a common “Content” table to store the properties common to all content types (e.g. State), and then I have one-off tables for storing properties specific to derived content objects (e.g. the BlogPosts table has columns for Title and Body).

For content classes, I’ve created a ContentBase that derives from EntityBase.  ContentBase adds properties for things like SubmittedBy, SubmittedDate, PublishDate (for delayed publishing), State (e.g. Draft, Published, PendingModeration, etc.), and ContentType (an enum: BlogPost, Article, Picture, etc.).

All of the properties on entities have public getters and setters so they work easily with nHibernate without requiring me to create a custom access method.  Maybe I’m being lazy here, but this really feels like the direction nHibernate pushes you.
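Sketched as code, the hierarchy so far looks roughly like this (everything is virtual so nHibernate can proxy the classes; enum members and property types beyond what’s mentioned above are my guesses):

```csharp
using System;

// Rough sketch of the entity hierarchy described above.  Enum members and
// property types are assumptions where the post doesn't spell them out.
public abstract class EntityBase
{
    public virtual int Id { get; set; }
}

public enum ContentState { Draft, PendingModeration, Published }
public enum ContentType { BlogPost, Article, Picture }

public abstract class ContentBase : EntityBase
{
    public virtual string SubmittedBy { get; set; }
    public virtual DateTime SubmittedDate { get; set; }
    public virtual DateTime PublishDate { get; set; }  // supports delayed publishing
    public virtual ContentState State { get; set; }
    public virtual ContentType ContentType { get; set; }
}

// Derived content types add their one-off columns (e.g. the BlogPosts table).
public class BlogPost : ContentBase
{
    public virtual string Title { get; set; }
    public virtual string Body { get; set; }
}
```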

My Business Logic Quandary

I’m working through a design issue right now specifically related to these content objects.  I want to be able to enforce rules/workflow around publishing content.  For example, any user could submit an article, but only editors are allowed to actually publish the article.

My question is where does this business logic fit best?  All properties on content objects have public getters and setters, and so by using the IContentRepository<T> interface, any user could set the state of a content item to “Published” and save it to the database.  I need some way to enforce the business rules about who can publish content.

Option 1 – Logic in Repositories

One thought is to encode this logic into the content repositories.  I don’t like that, however, because it shouldn’t be the repository’s job to manage permissions.  Repositories should only be concerned with loading and modifying data.

Option 2 – Logic in Entities

This leads me to the thought that consumers of my “object model” shouldn’t have any access to the repository layer.  Then, they couldn’t directly manipulate the data and circumvent the business rules.  Actually, I think this is a good idea no matter where I store the business logic, unless I can guarantee somehow that no business rules would be violated via direct access to the repositories.

So, if not in the repositories, then maybe I should add this logic into the content entities.  Each entity implements an IsValid() method that gets called before the repositories save data.  I could place my rules checks in this method, but it’s still somewhat messy.  For example, an Article entity knows who authored it, but it shouldn’t care who actually edits/moderates/publishes it.  In order for IsValid() to validate the publishing workflow, I’d have to add a property to content entities to track who published them.  Surely there would be some other such property down the line, and I’d have to add more and more to the entities that they really shouldn’t be concerned with.
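To make the smell concrete, here’s a minimal sketch of what Option 2 forces onto the entity – a PublishedBy property that exists only so IsValid() can check the workflow (all names beyond IsValid() are hypothetical stand-ins):

```csharp
public enum ContentState { Draft, Published }

// Minimal stand-in entity; in the real model this would derive from ContentBase.
public class Article
{
    public string Author { get; set; }
    public ContentState State { get; set; }

    // Workflow bookkeeping the entity arguably shouldn't have to carry.
    public string PublishedBy { get; set; }

    // Called by the repository before saving.
    public bool IsValid()
    {
        // Rule: published content must record who published it.
        if (State == ContentState.Published && string.IsNullOrEmpty(PublishedBy))
            return false;
        return true;
    }
}
```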

Option 3 – Introduce Services

Another option is to introduce an IPublisherService interface that handles publishing workflow.  The usage would look something like this:

   IUser author = UserRepository.GetByName("Jason");
   DateTime dateToPublish = DateTime.Parse("4/5/2009 1:34:00");
   BlogRepository.Create("post title", "post body", author, dateToPublish);

I have a few problems with this approach too.  Notice that the UserRepository is still public here.  That violates the “rule” I established above while discussing Option 2.  So, do I establish an IUserService that essentially duplicates the IUserRepository interface along with some extra business logic? 

Also, what about Tagging content?  If I go with the “directly access entity properties” approach of Option 1, then I can simply add tags to the Tags collection member of my objects and when the object is persisted, then the Tag objects are automatically generated by nHibernate and saved/mapped in the appropriate tables.

Option 4 – Hybrid with Credential Validation in Repository Calls / Entity Logic

Part of the problem with my current repository interface is that it has no concept of who is actually calling it.  I could take a dependency on the ASP.NET membership stuff and just assume that I can find out from inside the repository, but I don’t like being tightly coupled to dependencies like that.  So, I will probably stick with my IUser interface which abstracts user identities out a bit.

So, if I pass in user credentials via IUser with each repository call, and repositories know how to ask a central permissions service whether or not the given user has privileges to do the thing they’re asking about, then I can still embed the permissions part of the business logic into the repositories.  This feels natural somehow, and I’m inclined to go with it.  The remaining business logic around entities can go into the IsValid() method on the entities.

So, each repository call would look something like this:

   IUser author = UserRepository.GetByName("Jason");
   DateTime dateToPublish = DateTime.Parse("4/5/2009 1:34:00");
   var newPost = BlogRepository.Create("post title", "post body", author, dateToPublish); // author is checked for permissions to create blog posts
   publisher.SubmitForPublishing(newPost, author);

Then, some editor user comes along and tries:

   IUser moderator = UserRepository.GetByName(currentUserName);
   publisher.Publish(newPost, moderator);

... or, if the post isn’t up to snuff, rejects it:

   publisher.Reject(newPost, moderator);

... but if the post author tried it:

   publisher.Reject(newPost, author);

... then they'd get a security exception (which would also be logged in the audit log).
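Under the covers, the publisher would route every call through the central permissions service before touching the repositories. A minimal sketch, with every interface and method name here an assumption:

```csharp
using System;

public interface IUser { string Name { get; } }

// Central place that answers permission questions.
public interface IPermissionService
{
    bool CanPublish(IUser user);
}

public class SecurityException : Exception
{
    public SecurityException(string message) : base(message) { }
}

public class Publisher
{
    private readonly IPermissionService _permissions;

    public Publisher(IPermissionService permissions)
    {
        _permissions = permissions;
    }

    public void Publish(object content, IUser user)
    {
        if (!_permissions.CanPublish(user))
            throw new SecurityException(user.Name + " may not publish"); // also written to the audit log
        // ... mark the content Published and save it via the repository ...
    }
}
```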

I’m currently leaning towards this approach.  It combines the simplicity of embedding business logic into the entities without requiring me to hide the repositories or (heaven forbid) split them into reader/writer interfaces.

I’m still not quite sure what to do about public setters for things like the Tags collection on entities… but I think that’s a topic for my next post.

Thoughts?  Am I barking up the wrong tree entirely?

Tuesday, March 31, 2009

9 Reasons Why Your Mileage is Worse in Cold Weather

I’ve noticed that I get about 20-25% less average fuel efficiency in cold weather in my 2006 Honda Civic Hybrid, and not being an “engine guy”, I never really understood why.  My cold-weather average is between 43 and 46 mpg, while in warmer weather I average around 54 to 56 mpg.

Someone showed me an article today that offers 9 reasons why this happens.  Several of the points have source links helping validate the claims.  I’ll definitely have to do some more research, but I wanted to pass it along in case you’re wondering the same thing.

Happy hypermiling!

Read:  9 reasons why your winter fuel economy bites (metrompg.com)

Wednesday, January 21, 2009

12 Tips for New Twitter Users

I’ve been using Twitter for a while, and I thought I’d pass along a set of tips that have helped me get more out of it.  Hopefully you’ll find these useful, and if you have additional tips to share, please post comments.

1. Find other people who are talking about things you’re interested in – Tweetscan lets you search tweets for keywords.  It’s a great way to see who else is talking about topics you’re interested in and to find new people to follow.  You can also set up RSS feeds for your keyword searches!

2. Mr. Tweet – Mr. Tweet will analyze your friends and followers and suggest new people you should be following.  It’s growing quickly and analysis can take a couple of weeks, but it’s well worth joining.  Sign up at MrTweet.com.

3. Focus your tweets on your goals – Think about what you’re really trying to get out of Twitter.  If you’re just looking for someone to chat with, feel free to tweet about your cat, what you’re having for dinner, that jerk in front of you in the checkout line, etc.  You’ll probably bore your friends to death, but go ahead.  On the other hand, if you’re trying to build up a social network around a particular topic or concern, keep your tweets (mostly) related to that topic.  Chances are you’ll end up having more meaningful conversations with other people who care about the topic, and your signal-to-noise ratio will be much higher.

4. Accept replies from non-followers – I’m still surprised this isn’t on by default in Twitter, but it’s critical if you want to grow your network.  With this option turned off, you won’t see tweets from non-followers who randomly reply to you (e.g. if they found you through Tweetscan).  To enable it, go to the settings page in your Twitter profile, and on the Notices tab, select “Show me all @ replies”.

5. Advertise your Twitter account on your blog – Help your blog readers find you by putting a link to your Twitter account at the top of your blog near your RSS feed link.

6. Tweet your blog posts – When you post to your blog, let your community know by tweeting a link to the story.  It’s a great way to drive traffic to your site.

7. Stay connected – If you have a mobile phone with a data plan, consider installing a Twitter client such as TinyTwitter (for Windows Mobile) so you can stay connected to your community while on the go.  If you go for long periods without tweeting, your community will likely stop following you.  Install a desktop client like Twhirl or Tweetdeck to stay connected from your PC – they usually provide much richer interfaces than the Twitter.com website.

8. Retweet great tweets from your friends – When you see an interesting tweet from a friend, pass it along by “retweeting”.  This shows your friends you care and helps spread the word.

9. Shrink URLs – Use a service like TinyUrl to shrink your URLs to fit within the 140 character limit.

10. Share photos with Twitpic – Many Twitter clients have tight integration with Twitpic – a photo sharing service.  Just specify your Twitter account, point to a photo, and you can tweet it easily.  This is a great way to tweet live events in realtime from mobile phones, for example. 

11. Grade yourself – TwitterGrader will analyze your friends and followers and your tweeting habits and assign you a grade from 0 to 100.  You can also use it to find other “top tweeters” in your area.  It’s a fantastic way to connect with other local tweeters!

12. Use a photo in your Twitter profile – Personalize your Twitter profile by uploading a photo.  It doesn’t have to be a photo of you, but make it something that represents who you are.  Feel free to update it from time to time to keep your persona fresh.

Tuesday, December 30, 2008

Moving from ActiveRecord to the Repository Pattern in my CMS

Some of you may know that I’ve been working on a CMS in my spare time.  This is partly so I can have something easy to publish my blog with, but mostly it’s just a learning exercise.  I’ve been working on and off on this thing for over a year.  Along the way, I’ve picked up some new C# skills, ASP.NET, some new SQL skills, learned VS Team Data, etc.  Most recently, I’ve been digging into ASP.NET MVC, Test-Driven Development, unit testing, dependency injection/inversion of control, and Domain-Driven Design.

So… I’ve decided to commit the ultimate sin and start the project over essentially from scratch, abandoning what I wrote before I learned all this new stuff in hopes that it’ll still save me time in the long run.

This post is a form of “thinking out loud” to help me work through my new design and convince myself it’s oatmeal (“the right thing to do”).  Comments are welcome.  Just don’t tell me I’m crazy; I already know that.

Initial Attempts, and Why I Abandoned Them

My first attempt at building the core CMS classes looked roughly like this (using my Blog model as an example):

BlogPostInfo – Read-only value object / data transfer object that basically represent a row in the database

IBlogDAL – Interface to abstract away my data access layer

BlogProvider – Implements IBlogDAL.  Makes calls to database for CRUD operations via ADO.NET.  Deals only with BlogPostInfo objects

DALFactory – factory class that knows how to instantiate the DAL implementations registered in web.config

BlogPost - “fat” class that has a hard-coded dependency on DALFactory. Adds some static methods for CRUD (passes through to the IBlogDAL implementation), adds setters for properties, adds lazy-loading “smart” properties for related objects.  Typical usage looks like:

BlogPost post = BlogPost.GetPostById(postId);

foreach (Comment comment in post.Comments)
{
    // lazy-loads comments for this post via the Comments static class
}
The BlogPost.Comments property doesn’t exist on the thin read-only BlogPostInfo class (none of the “helper” methods do), only on the BlogPost class.  When the property is accessed, I call through to my Comments static class to a method that returns all comments for the current blog post ID.

This all made for an easy-to-use BLL, but I didn’t find it very easy to update or to unit test.  If I wanted to add or remove a method on the domain model, I might have to update as many as five different layers: the IXXXDAL interface, the DAL implementation, database artifacts (sprocs & tables, maybe also views), the XXXInfo read-only entity, and finally the BLL class in the “fat” object model.

Adding to the downsides of this approach, my BLL classes hard-coded their dependency on a DAL Factory class (a la the Pet Shop 4.0 example).  So each BLL class has a private “DAL” member variable that calls the factory, which in turn reads web.config to determine what class implements the requested data access type.  So, if I wanted to unit test my BlogPost objects without standing up a database, I’d have to replace web.config to point to a mock IBlogDAL.

So my goal in redesigning my core infrastructure was to reduce the number of layers to the bare minimum and to make my code easier to unit test.  I’m afraid this will come at the cost of usability of my core services components, but I’m willing to give it a try in the name of quality.

Domain Model

Taking a loose interpretation of a DDD approach, I’ll start by describing my new domain model.  Let’s look at the Blog model as an example:

I’m going with a thin read/write class for each of my domain entities.  For example:

public class BlogPost : BaseEntity
{
    public string Title { get; set; }
    public string Author { get; set; }
    public string Markup { get; set; }
    public DateTime PostedDate { get; set; }
}

BaseEntity is a simple base class that includes an ID property and requires derived classes to specify a “type” ID.  The type ID is an enumeration that includes things like BlogPosts, Articles, Photos, etc.  I use it in tagging and commenting so I can implement generic tagging and commenting tables (tags and comments are linked to instances of objects by their ID and by their type ID in a SQL table).
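In code, that base class and type ID might look like this (the enum members come from the post; the abstract-property mechanism is my assumption about how derived classes “specify” their type):

```csharp
public enum EntityType { BlogPosts, Articles, Photos }

public abstract class BaseEntity
{
    public int Id { get; set; }

    // Each derived class must say what kind of entity it is; the generic
    // Tags/Comments tables link rows by (Id, TypeId).
    public abstract EntityType TypeId { get; }
}

public class Photo : BaseEntity
{
    public override EntityType TypeId { get { return EntityType.Photos; } }
}
```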

I’ve decided against using interfaces to abstract away implementations of the domain model in order to keep things simple.  I’d love your opinions on this.  Personally, I don’t yet see the point of implementing an interface on a simple class that has no methods.


I will use interfaces for each type of repository in order to make unit testing my code easier.  I’m going to keep repositories as simple as possible, ideally sticking to just CRUD operations for each object type.  For example:

public interface IBlogPostRepository
{
    BlogPost GetBlogPostById(int id);
    IList<BlogPost> GetAllBlogPosts();
    bool Save(BlogPost post);
    bool Delete(BlogPost post);
}
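The payoff of putting the repository behind an interface is testability: a unit test can swap in an in-memory fake instead of standing up a database. A sketch (the fake and the minimal BlogPost stand-in are mine, not part of the real project):

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for the real domain entity.
public class BlogPost
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public interface IBlogPostRepository
{
    BlogPost GetBlogPostById(int id);
    IList<BlogPost> GetAllBlogPosts();
    bool Save(BlogPost post);
    bool Delete(BlogPost post);
}

// In-memory fake for unit tests -- no database required.
public class FakeBlogPostRepository : IBlogPostRepository
{
    private readonly Dictionary<int, BlogPost> _store = new Dictionary<int, BlogPost>();

    public BlogPost GetBlogPostById(int id) { return _store[id]; }
    public IList<BlogPost> GetAllBlogPosts() { return _store.Values.ToList(); }
    public bool Save(BlogPost post) { _store[post.Id] = post; return true; }
    public bool Delete(BlogPost post) { return _store.Remove(post.Id); }
}
```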

This feels pretty “raw” to me as far as something I’d consume from a controller (as opposed to the “fat” BLL approach above).  For example, my BlogPost BLL object had a method to approve posts (BlogPost.Approve()).  Using this approach, I’d have to do the following:

BlogPost post = blogRepository.GetBlogPostById(postId);
post.Approved = true;
blogRepository.Save(post);


In the old model, I would have done:

BlogPost post = BlogPost.GetBlogPostById(postId);
post.Approve();

Where do Services Fit In?

Here’s where I’m still waffling on the design. Should an IRepository include non-CRUD methods like encoding user-generated input from a forum post into HTML safe for rendering?  Should a repository implement validation functionality, such as making sure a BlogPost object has the author field specified before attempting to update it in the data store?

Essentially… where does the business logic fit in?  I’m thinking the answer is in services.

It seems to me that for some things, a repository should depend on business logic services such as validators.  In other cases, services should depend on repositories.  I’d love your thoughts on this.  Also, where do you draw the line and decide to implement a method as a service rather than on a repository?  For example, should ModerateComment be a method on an ICommentRepository or in an ICommentService?
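One concrete shape for the “repository depends on a validation service” direction – every name here is hypothetical, just to make the dependency direction visible:

```csharp
// Minimal stand-in for the real domain entity.
public class BlogPost
{
    public string Author { get; set; }
    public string Title { get; set; }
}

public interface IValidator<T>
{
    bool IsValid(T item);
}

// The business rule lives in a service, not in the repository itself.
public class BlogPostValidator : IValidator<BlogPost>
{
    public bool IsValid(BlogPost post)
    {
        // Rule: a post must name its author before it can be stored.
        return !string.IsNullOrEmpty(post.Author);
    }
}

public class BlogPostRepository
{
    private readonly IValidator<BlogPost> _validator;

    public BlogPostRepository(IValidator<BlogPost> validator)
    {
        _validator = validator;
    }

    public bool Save(BlogPost post)
    {
        if (!_validator.IsValid(post))
            return false;
        // ... persist via ADO.NET / ORM ...
        return true;
    }
}
```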

And Another Thing…

I really liked one thing about my ActiveRecord approach.  Classes in the BLL had behaviors, such as being commentable or taggable.  I had implemented these behaviors by specifying interfaces, ICommentable and ITaggable.  If a class implemented those interfaces, then a user could apply comments or tags to it.  It was nice because the behavior for the business object was encapsulated with the class.

If I understand the repository/service approach correctly (and I most certainly do NOT), then I think I’d have to implement an ICommentService that knows how to apply comments to all domain entities.  I’m not comfortable with this – now the business objects (or what’s left of the concept of a business object) don’t know what they can or can’t do. 

For example, I’m thinking of ICommentService having the following methods:

public interface ICommentService
{
    void AddCommentToObject(CommentInfo comment, BaseEntity target);
    void ModerateComment(CommentInfo comment, bool approved);
    bool IsOkToPostComments(BaseEntity target);
}

But what if I had some business objects that shouldn’t be commentable?  I suppose some of my options would be:

1. Derive a class CommentableEntity from BaseEntity and then derive any commentable classes from CommentableEntity.  If an object isn’t commentable, then it would just derive from BaseEntity instead of CommentableEntity.  C# doesn’t allow multiple inheritance though, so I couldn’t follow this approach for ITaggable or any other additional behaviors.

2. Include AllowComments and AllowTags properties on the BaseEntity class so all derived classes can decide whether or not they want to allow comments.

3. Stick with the ICommentable and ITaggable interfaces – but how do those fit in when your domain entities are super-thin containers around properties (at least, that’s what all the examples I’ve seen indicate)?
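For what it’s worth, option 3 can coexist with thin entities if ICommentable stays an empty marker interface and the service does the checking – a sketch, with all names assumed:

```csharp
// Empty marker interfaces: entities stay thin property bags.
public interface ICommentable { }
public interface ITaggable { }

public class BlogPost : ICommentable, ITaggable
{
    public string Title { get; set; }
}

public class SiteSetting { }  // deliberately not commentable

public class CommentService
{
    public bool IsOkToPostComments(object target)
    {
        // The service, not the entity, interprets the marker.
        return target is ICommentable;
    }
}
```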

The Wrap Up

I’ll stop babbling for now – hopefully someone will read this and offer some sage wisdom to set me on the right track.  Thanks in advance!

Thursday, October 2, 2008

A Prius With Something To Say

A co-worker of mine got his new vanity plate for his Prius today.  I’m actually shocked they gave it to him, but then again, someone at the DMV would have to be a) conscious, and b) care enough to deny this gem.  If you can’t figure it out, post a comment ;)


Tuesday, September 23, 2008

Debunking Nostradamus Books for Sport

I randomly decided I wanted to learn more about Nostradamus and his alleged predictions for the end of the world.  Why not?  Between increasing tensions in the Middle East, the crazy worldwide financial crisis, and hurricanes increasing in strength and destruction each year, I figured it would at least be interesting.

So I popped open a web search for “Nostradamus predictions”, and the 2nd hit looked intriguing:

From NostradamusOnline.com:

In May 2005, the Italian National Library in Rome made an amazing discovery. Buried in their archives was an unknown manuscript written by the famed prophet Michel de Nostradame, or Nostradamus (1503-1566). This manuscript was handed down to his son and later donated to Pope Urban VIII. It did not surface again until now, almost four hundred years later.

Next, I wanted to see if the book was available on Amazon.  It is, and it got some fairly horrible reviews – several even alleging plagiarism of other authors!

A bit more searching on Amazon.com turned up this gem:

In 1994 members of the Italian National Library in Rome found buried in their archives an unknown and unpublished manuscript consisting of 80 mysterious paintings by the famed prophet Michel de Nostradamus (1503-1566). This manuscript, handed down to the prophet's son and later donated by him to Pope Urban VIII, confirms the hidden chronology of Nostradamus's quatrains discovered by the well-known Nostradamus scholar Ottavio Cesare Ramotti.
In both the paintings and accompanying quatrains within, Nostradamus correctly predicts such key events as the Nazi Blitzkrieg, the assassination attempt on Pope John Paul II, the burning of the oil wells of Kuwait by Iraq, and Boris Yeltsin's rise to power. Knowing the power that his prophecies contained, and wary of this power falling into the wrong hands, Nostradamus scrambled both the meaning and the order of his quatrains so that humanity would not be able to use them until it had become sophisticated enough to decode them. That time is now. Using a software program he created, Ramotti has finally cracked the code and produced a book that is required reading for those who want to know what the next millennium has in store.

Notice a strange similarity between the descriptions?

So which is it?  Did the Italian National Library discover this manuscript in 1994 or 2005?  Is Boris Yeltsin the great Russian trouble-maker, or is it perhaps Putin?  It’s pretty easy to read whatever you want into the quatrains depending on the current times.  For example, several books interpreted that “Mabus” (supposedly Mr. Bush) would meet an untimely end in 2007, and we know that didn’t happen. 

It’s too bad this whole Nostradamus thing reeks of Vegas-like levels of fakery and deception.  It’s even worse that the whole rigmarole is recycled every couple of decades.  Worse yet, apparently some of us fall for it.