Monitoring Data Model Changes

Have you ever wanted to see how a database model has evolved over time? It’s not that hard to set up an infrastructure to support this, and in this series of articles I’ll walk through the process. For a jump start, I’ve provided a PowerPoint deck and a paper I wrote covering the end-to-end solution. In this series I’ll take a more step-by-step approach, since the full solution encompasses more components than can easily be digested in a single sitting.

Approach

  1. Create a schema that captures all entities and their relationships, along with a change log table whose revision ID serves as the root key for all related tables.
  2. Create a stored procedure that inserts from the system tables and views into the appropriate tables.
  3. Add a DDL trigger that invokes the stored procedure to insert the snapshot (a sketch of this trigger follows the list).
  4. Create a services layer to query the model.
  5. Create a ClickOnce application to query the model and view it in either replay or real-time mode.
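
To make step 3 concrete, here’s a minimal sketch of what the DDL trigger could look like. The procedure name dbo.CaptureSchemaSnapshot is just a placeholder for the stored procedure from step 2, which I’ll cover in the next post.

CREATE TRIGGER trg_CaptureModelChange
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;

    -- EVENTDATA() carries the who/what/when of the DDL change as XML.
    DECLARE @EventData XML;
    DECLARE @LoginName NVARCHAR(128);
    DECLARE @ObjectName NVARCHAR(128);
    DECLARE @Command NVARCHAR(MAX);

    SET @EventData = EVENTDATA();
    SET @LoginName = @EventData.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)');
    SET @ObjectName = @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');
    SET @Command = @EventData.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');

    -- Hand the change details off to the snapshot procedure (placeholder name).
    EXEC dbo.CaptureSchemaSnapshot
        @LoginName = @LoginName,
        @ObjectName = @ObjectName,
        @Command = @Command;
END;
GO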

Schema

Below is the schema I chose to support the required data. 

[Schema diagram]

Here is a summary of the key tables and their purposes:

ChangeLog – Contains details about the database change – originating user, date/time, object changed, command issued, and change log version ID. Each entry in the change log corresponds to a complete snapshot of the database schema at a point in time.

SchemaTable – Contains details about the changed table, including the affected schema, for a particular database version.

TableRelation – Identifies the relationships between different tables for a particular database version.

RelationColumnPair – Identifies the table columns that are used to join two tables together in a relationship.
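
Since the diagram may be hard to read here, below is a simplified sketch of those four tables. The column names and data types are my shorthand for illustration rather than the exact definitions from the diagram.

CREATE TABLE dbo.ChangeLog (
    ChangeLogID  INT IDENTITY(1,1) PRIMARY KEY,   -- revision ID, root key for all related tables
    LoginName    SYSNAME       NOT NULL,          -- originating user
    ChangeDate   DATETIME      NOT NULL DEFAULT (GETDATE()),
    ObjectName   SYSNAME       NOT NULL,          -- object changed
    CommandText  NVARCHAR(MAX) NOT NULL           -- DDL command issued
);

CREATE TABLE dbo.SchemaTable (
    SchemaTableID INT IDENTITY(1,1) PRIMARY KEY,
    ChangeLogID   INT     NOT NULL REFERENCES dbo.ChangeLog (ChangeLogID),
    SchemaName    SYSNAME NOT NULL,
    TableName     SYSNAME NOT NULL
);

CREATE TABLE dbo.TableRelation (
    TableRelationID     INT IDENTITY(1,1) PRIMARY KEY,
    ChangeLogID         INT NOT NULL REFERENCES dbo.ChangeLog (ChangeLogID),
    ParentSchemaTableID INT NOT NULL REFERENCES dbo.SchemaTable (SchemaTableID),
    ChildSchemaTableID  INT NOT NULL REFERENCES dbo.SchemaTable (SchemaTableID)
);

CREATE TABLE dbo.RelationColumnPair (
    RelationColumnPairID INT IDENTITY(1,1) PRIMARY KEY,
    TableRelationID      INT     NOT NULL REFERENCES dbo.TableRelation (TableRelationID),
    ParentColumnName     SYSNAME NOT NULL,
    ChildColumnName      SYSNAME NOT NULL
);

The change log version ID (ChangeLogID here) is the revision ID that ties every snapshot row back to the change that produced it.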

That’s all I have time for today – got to catch the train. In the next post, I plan to get more into the guts of the stored procedures that actually populate this data, and I’ll provide a snapshot of what the functionality looks like. Hopefully, I’ll have a demo version up and running on my server in the next couple of days so you can see the interface. I’m busy setting up an ISA Server so I can take advantage of my 8 different IP addresses and tie more than one web server to my leithisers.com domain.


Welcome Back, again…

After a false start a couple of months back, I’m determined to get more involved in the community, start blogging daily, and not let all the distractions pull me away from it. My background includes over 22 years in software development. I am currently a Sr. Consultant at Microsoft working in the Department of Defense sector. My primary areas of focus include .NET, Team System and Team Foundation Server, SQL Server, and SOA, along with interactions with SharePoint, BizTalk, Windows Workflow Foundation, and Windows Communication Foundation. I am also pursuing a PhD at Auburn University. My hope is to start posting whatever I can from my work and academic experience. My area of research is software automation, particularly as it is enabled through well-defined relational repositories.

Here is what I plan to post over the next few days to get things rolling:

1) The Data Model Monitor – This is a tool that not only audits SQL Server database changes using a Data Definition Language (DDL) trigger, but also includes a schema and supporting stored procedures to capture a snapshot of the database model after each change. Along with this, there is a replay function that allows a user to see how a database model has evolved. The tool also supports

2) Working with Hyper-V to set up a virtual lab – Experiences from the field. I’ll be sharing what I learn as I set up a virtual lab of several machines running under Hyper-V.

3) Data and Business Layer generator – This tool generates data and business objects to help automate these aspects of an application and decouple these layers from the UI. I’ll be discussing the Microsoft Enterprise Library as part of this exercise.

4) Basic techniques for finding bottlenecks with SQL Server. I’ll discuss how to capture trace output, store the data in a database, and then use queries to quickly find the low-hanging fruit in a poorly performing database so you can clean up queries and set up appropriate indexes (a rough sketch of what I mean follows this list).

5) Setting up ADFS with SharePoint and providing a SOA interface point through an Enterprise Bus.

6) Using Team Foundation Server work items with Microsoft Project integration in order to implement time tracking for a development project.
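
For item 4, here’s a rough sketch of the kind of analysis I have in mind: load a trace file into a table with fn_trace_gettable and rank statements by total duration. The file path and table name are placeholders.

-- Pull a server-side trace file into a working table.
SELECT *
INTO   dbo.TraceData
FROM   sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);

-- Rank statements by total duration (microseconds in SQL Server 2005 and later).
SELECT TOP (20)
       CAST(TextData AS NVARCHAR(4000)) AS QueryText,
       COUNT(*)      AS Executions,
       SUM(Duration) AS TotalDuration,
       SUM(Reads)    AS TotalReads
FROM   dbo.TraceData
WHERE  TextData IS NOT NULL
GROUP  BY CAST(TextData AS NVARCHAR(4000))
ORDER  BY TotalDuration DESC;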

This will keep me busy for a while…

“Microsoft Bob”
