I’ve been meaning to write a post describing this simple concept. The main idea is to create a pattern for loading “bits” of a webpage asynchronously, particularly MVC partial views. I find this technique useful because I tend to abuse partial views to maximize code reuse. I’ll use this simplified example to describe how I do it.

Let’s start with the basics. Imagine you have a simple MVC application with one controller and four actions, like so:

public class AsyncController : Controller
{
    [HttpGet]
    public ViewResult List()
    {
        var users = UserService.GetUsers();
        return View(users);
    }

    [HttpGet]
    public ViewResult Details(int Id)
    {
        var user = UserService.GetUser(Id);
        return View(user);
    }

    [HttpGet]
    public ViewResult Edit(int Id)
    {
        var user = UserService.GetUser(Id);
        return View(user);
    }

    [HttpPost]
    public ViewResult Edit(UserModel model)
    {
        if (ModelState.IsValid)
        {
            UserService.UpdateUser(model);
            ViewBag.Result = "OK";
        }
        else
        {
            var res = ModelState.Values;
            ViewBag.Result = "NOK";
        }
        return View(model);
    }
}

public class UserModel
{
    public int Id { get; set; }

    [Required(ErrorMessage = "Field can't be empty")]
    public string Name { get; set; }

    [DataType(DataType.EmailAddress, ErrorMessage = "E-mail is not valid")]
    public string Email { get; set; }
}

So, pretty standard. For clarity I will start with the List view.

@model IEnumerable<Demo.AsyncControls.Models.UserModel>
@{
    ViewBag.Title = "List";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<div>
    <h2>List</h2>
    <table class="table">
        <tr>
            <th>@Html.DisplayNameFor(model => model.Name)</th>
            <th>@Html.DisplayNameFor(model => model.Email)</th>
            <th></th>
        </tr>
        @foreach (var item in Model)
        {
            <tr>
                <td>@Html.DisplayFor(modelItem => item.Name)</td>
                <td>@Html.DisplayFor(modelItem => item.Email)</td>
                <td>
                    @Html.ActionLink("edit", "Edit", new { Id = item.Id }) |
                    @Html.ActionLink("details", "Details", new { Id = item.Id })
                </td>
            </tr>
        }
    </table>
</div>

This generates something similar to this:

[Screenshot: the rendered user list]

So far I’ve described the basics of an MVC application with one controller, three views and four actions.

The challenge now is to make everything load asynchronously. Quoting David Wheeler: “All problems in computer science can be solved by another level of indirection”, so my solution is to do just that. The main idea is to move all the relevant dynamic bits of information, in this case the list, out of the page and into a partial view.

So the view would look like this:

@using Demo.AsyncControls.Helpers
@{
    ViewBag.Title = "List";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<div id="list" class="loader">
    <script>
        $(document).ready(function () {
            window.async.getFromController('/Async/AsyncList', 'list', null);
        });
    </script>
</div>

This JavaScript function does the heavy lifting and loads the partial view (explained below). The partial view looks like this:

@using Demo.AsyncControls.Helpers
@model IEnumerable<Demo.AsyncControls.Models.UserModel>
<div id="list">
    <h2>List</h2>
    <table class="table">
        <tr>
            <th>@Html.DisplayNameFor(model => model.Name)</th>
            <th>@Html.DisplayNameFor(model => model.Email)</th>
            <th></th>
        </tr>
        @foreach (var item in Model)
        {
            <tr>
                <td>@Html.DisplayFor(modelItem => item.Name)</td>
                <td>@Html.DisplayFor(modelItem => item.Email)</td>
                <td>
                    @Html.ActionLink("edit", "Edit", new { Id = item.Id }) |
                    @Html.ActionLink("details", "Details", new { Id = item.Id })
                </td>
            </tr>
        }
    </table>
</div>

Notice that the model went into the partial view.

The controller now looks like this:

[HttpGet]
public ViewResult List()
{
    return View();
}

[HttpGet]
public PartialViewResult AsyncList()
{
    var users = UserService.GetUsers();
    return PartialView(users);
}

Let’s take a closer look at the JavaScript that makes this possible.

(function () {
    window.async = {};
    window.async.disabledClass = 'disabled';

    window.async.getFromController = function (url, name, model) {
        var obj = $('#' + name);
        ajaxCall(url, obj, model, 'GET');
    }

    window.async.postToController = function (url, name) {
        var obj = $('#' + name);
        var form = obj.find('form');
        form.submit(function (event) {
            var model = $(this).serializeArray();
            ajaxCall(url, obj, model, 'POST');
            event.preventDefault();
        });
    }

    function ajaxCall(url, obj, model, verb) {
        $.ajax({
            cache: false,
            type: verb,
            url: url,
            data: model,
            beforeSend: function () {
                obj.addClass(window.async.disabledClass);
            },
            success: function (data) {
                obj.replaceWith(data);
            },
            error: function (xhr, ajaxOptions, thrownError) {
                //log this information or show it…
                obj.replaceWith(xhr.responseText);
            },
            complete: function () {
                obj.removeClass(window.async.disabledClass);
                updateURL(url, model, verb);
            }
        });
    }

    function updateURL(action, model, verb) {
        if (history.pushState) {
            var split = action.split('/');
            split[split.length - 1] = split[split.length - 1].replace('Async', '');
            action = split.join('/');
            if (model != null && verb != 'POST')
                action = action + '?' + $.param(model);
            var newurl = window.location.protocol + '//' + window.location.host;
            newurl = newurl + action;
            window.history.pushState({ path: newurl }, '', newurl);
        }
    }
})();

The first two functions (getFromController and postToController) GET or POST the information to a controller. postToController also serializes the form before sending it to the controller (var model = $(this).serializeArray()), so this code expects the information to come from a form element, and that form must exist within the indicated element.

The updateURL function manages the URL so you can use URL sharing (without this feature the actions would not change the URL and your users could not share a link). This code expects every partial view using this technique to be prefixed with “Async” and named after the corresponding view, in this example “AsyncList”. You will notice that the information is fetched from the Async action while the URL changes (without a postback) to the corresponding view.
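To make the naming convention concrete, here is a small standalone sketch of the rename updateURL performs on the last URL segment (toShareableUrl is an illustrative name, not part of the code above):

```javascript
// Standalone sketch of the convention updateURL relies on: strip the
// "Async" prefix from the last segment to obtain the shareable route.
function toShareableUrl(asyncUrl) {
    var split = asyncUrl.split('/');
    // replace() only touches the first occurrence, so the controller
    // segment "Async" is untouched; only the action is renamed.
    split[split.length - 1] = split[split.length - 1].replace('Async', '');
    return split.join('/');
}

// '/Async/AsyncList' becomes '/Async/List', the URL of the regular view.
```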

To make things a bit simpler, let’s use an HTML helper to hide the script tag in the List view, and add a couple more operations while we’re at it.

public class AsyncLoader
{
    public static MvcHtmlString Render(string Controller, string Action, string PlaceHolder)
    {
        return AsyncLoader.Render(Controller, Action, PlaceHolder, null);
    }

    public static MvcHtmlString Render(string Controller, string Action, string PlaceHolder, dynamic Model)
    {
        var model = ObjectToJason(Model);
        var html = $@"<script> $(document).ready(function(){{ window.async.getFromController('/{Controller}/{Action}', '{PlaceHolder}', {model}); }}); </script>";
        return MvcHtmlString.Create(html);
    }

    public static MvcHtmlString Action(string Controller, string Action, string PlaceHolder, string Link)
    {
        return AsyncLoader.Action(Controller, Action, PlaceHolder, Link, null);
    }

    public static MvcHtmlString Action(string Controller, string Action, string PlaceHolder, string Link, dynamic Model)
    {
        var model = ObjectToJason(Model);
        var html = $@"<script> $('#{Link}').click(function(){{ window.async.getFromController('/{Controller}/{Action}', '{PlaceHolder}', {model}); }}); </script>";
        return MvcHtmlString.Create(html);
    }

    public static MvcHtmlString Post(string Controller, string Action, string PlaceHolder)
    {
        var html = $@"<script> window.async.postToController('/{Controller}/{Action}', '{PlaceHolder}'); </script>";
        return MvcHtmlString.Create(html);
    }
}

There are three operations here: Render loads a control, Action loads a control on click, and Post posts a form to the controller. The “PlaceHolder” parameter identifies the HTML element that will be replaced with the loaded content.
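As a rough illustration of what Render produces, here is a JavaScript mirror of its string interpolation (renderScript is a hypothetical name used only to visualize the output; the real helper is the C# class above):

```javascript
// Illustrative mirror of AsyncLoader.Render's string interpolation,
// showing the script text the helper injects into the page.
function renderScript(controller, action, placeholder, modelJson) {
    // A null model is serialized to the literal 'null', as in ObjectToJason.
    modelJson = modelJson || 'null';
    return "$(document).ready(function(){ " +
        "window.async.getFromController('/" + controller + "/" + action +
        "', '" + placeholder + "', " + modelJson + "); });";
}
```

For example, renderScript('Async', 'AsyncList', 'list') yields the same call the List view registered by hand earlier.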

The missing serialization code is shown next:

private static JsonSerializerSettings CreateSerializerSettings()
{
    var serializerSettings = new Newtonsoft.Json.JsonSerializerSettings();
    serializerSettings.MaxDepth = 5;
    serializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;
    serializerSettings.MissingMemberHandling = Newtonsoft.Json.MissingMemberHandling.Ignore;
    serializerSettings.NullValueHandling = Newtonsoft.Json.NullValueHandling.Ignore;
    serializerSettings.ObjectCreationHandling = Newtonsoft.Json.ObjectCreationHandling.Reuse;
    serializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Ignore;
    return serializerSettings;
}

private static string ObjectToJason(object obj)
{
    var json = "null";
    if (obj == null)
        return json;
    try
    {
        var serializerSettings = CreateSerializerSettings();
        json = Newtonsoft.Json.JsonConvert.SerializeObject(obj, Newtonsoft.Json.Formatting.None, serializerSettings);
    }
    catch
    {
        //log this
    }
    return json;
}

Now we can rewrite the List as such:

@using Demo.AsyncControls.Helpers
@{
    ViewBag.Title = "List";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<div id="list" class="loader">
    <h4>Loading…</h4>
    @AsyncLoader.Render("Async", "AsyncList", "list")
</div>

Remember that the loading script replaces the indicated element with the returned content. So once loaded, this markup is completely replaced by the returned AsyncList partial.

So let’s see the final Views and Partial Views.

Details (View and Partial View):

@using Demo.AsyncControls.Helpers
@model int
@{
    ViewBag.Title = "Details";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<div id="list" class="loader">
    @AsyncLoader.Render("Async", "AsyncDetails", "list", new { Id = Model })
</div>

@using Demo.AsyncControls.Helpers
@model Demo.AsyncControls.Models.UserModel

<div id="details">
    <h2>Details</h2>
    <h4>UserModel</h4>
    <hr />
    <dl class="dl-horizontal">
        <dt>@Html.DisplayNameFor(model => model.Name)</dt>
        <dd>@Html.DisplayFor(model => model.Name)</dd>
        <dt>@Html.DisplayNameFor(model => model.Email)</dt>
        <dd>@Html.DisplayFor(model => model.Email)</dd>
    </dl>
    <a id="link-details" href="#" class="btn btn-primary">edit</a>
    @AsyncLoader.Action("Async", "AsyncEdit", "details", "link-details", new { Id = Model.Id })
    <a id="link-list" href="#" class="btn btn-primary">list</a>
    @AsyncLoader.Action("Async", "AsyncList", "details", "link-list")
</div>

Edit (View and Partial View):

@using Demo.AsyncControls.Helpers
@model int
@{
    ViewBag.Title = "Edit";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<div id="list" class="loader">
    @AsyncLoader.Render("Async", "AsyncEdit", "list", new { Id = Model })
</div>

@using Demo.AsyncControls.Helpers
@model Demo.AsyncControls.Models.UserModel
<div id="edit">
    <h2>Edit</h2>
    @using (Html.BeginForm())
    {
        @Html.AntiForgeryToken()
        <div class="form-horizontal">
            <h4>UserModel</h4>
            <hr />
            @Html.ValidationSummary(true, "", new { @class = "text-danger" })
            @Html.HiddenFor(model => model.Id)
            <div class="form-group">
                @Html.LabelFor(model => model.Name, htmlAttributes: new { @class = "control-label col-md-2" })
                <div class="col-md-10">
                    @Html.EditorFor(model => model.Name, new { htmlAttributes = new { @class = "form-control" } })
                    @Html.ValidationMessageFor(model => model.Name, "", new { @class = "text-danger" })
                </div>
            </div>
            <div class="form-group">
                @Html.LabelFor(model => model.Email, htmlAttributes: new { @class = "control-label col-md-2" })
                <div class="col-md-10">
                    @Html.EditorFor(model => model.Email, new { htmlAttributes = new { @class = "form-control" } })
                    @Html.ValidationMessageFor(model => model.Email, "", new { @class = "text-danger" })
                </div>
            </div>
            <div class="form-group">
                <div class="col-md-offset-2 col-md-10">
                    <input type="submit" value="Save" class="btn btn-default" />
                    @if (ViewBag.Result != null)
                    {
                        @ViewBag.Result
                    }
                </div>
            </div>
        </div>
        <a id="link-details" href="#" class="btn btn-primary">details</a>
        @AsyncLoader.Action("Async", "AsyncDetails", "edit", "link-details", new { Id = Model.Id })
        <a id="link-list" href="#" class="btn btn-primary">list</a>
        @AsyncLoader.Action("Async", "AsyncList", "edit", "link-list")
        @AsyncLoader.Post("Async", "AsyncEdit", "edit")
    }
</div>

And now, the controller:

public class AsyncController : Controller
{
    [HttpGet]
    public ViewResult List()
    {
        return View();
    }

    [HttpGet]
    public ViewResult Details(int Id)
    {
        return View(Id);
    }

    [HttpGet]
    public ViewResult Edit(int Id)
    {
        return View(Id);
    }

    [HttpGet]
    public PartialViewResult AsyncList()
    {
        var users = UserService.GetUsers();
        return PartialView(users);
    }

    [HttpGet]
    public PartialViewResult AsyncDetails(int Id)
    {
        var user = UserService.GetUserById(Id);
        return PartialView(user);
    }

    [HttpGet]
    public PartialViewResult AsyncEdit(int Id)
    {
        var user = UserService.GetUserById(Id);
        return PartialView(user);
    }

    [HttpPost]
    public PartialViewResult AsyncEdit(UserModel model)
    {
        if (ModelState.IsValid)
        {
            UserService.UpdateUser(model);
            ViewBag.Result = "OK";
        }
        else
        {
            var res = ModelState.Values;
            ViewBag.Result = "NOK";
        }
        return PartialView(model);
    }
}

And that’s it: with this simple example you can load your partial views asynchronously. If your main menu also uses this technique (and you extend this a little bit) you will be able to use MVC as a SPA with minimal adaptation of existing applications (assuming they are straightforward).

That’s it, happy coding…

The code is here https://gofile.io/?c=53jJHY

This time I will be posting another “recipe”. This one helps you log application information, and it is a continuation of my previous article Enterprise Library Logging Application Block; if you haven’t read it, consider reading it now, as it describes the setup needed to get your logging working with the Microsoft Enterprise Library. It’s also part of my Enterprise Library (https://srramalho.wordpress.com/category/enterprise-library/) series of posts.

Let’s dive right into it. First step: register the Logging block in your Global.asax (for ASP.NET applications).

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        //(…)
        EnterpriseConfig.RegisterEnterprise(); //Sets up Enterprise Library
        //(…)
    }
}

public class EnterpriseConfig
{
    public static void RegisterEnterprise()
    {
        //logging
        DatabaseFactory.SetDatabaseProviderFactory(new DatabaseProviderFactory());
        Logger.SetLogWriter(new LogWriterFactory().Create());
        //(…)
    }
}

In this example I’m logging all information to a database. But you can send the information to the system event viewer, email, file or whatever.

So now, along with what’s configured in your web.config, you have the basics of your logging.

[Screenshot: Enterprise Library configuration tool showing the logging setup]

This is my setup. There are four categories: General (the default), Error, Warning and Info. The “targets” and “formatters” are of no importance right now. All you need to do is configure the corresponding categories in your application. The easiest way to do that is to create the following enums:

public enum LoggingCategories
{
    General,
    Debug,
    Error,
    Warning,
    Info,
}

public enum LoggingPriorities
{
    Critical = 1,
    High = 10,
    Normal = 100,
    Low = 1000
}

Do you see where this is going? The idea is to prevent the programmer from writing hard-coded strings by forcing them to use enums instead.

Before showing you the code, there are two main “features” I want to accomplish. The first, and more important, is to serialize exceptions in my logger: not just the stack trace but all the properties of the exception. The second is the ability to pass anonymous classes with properties to log.

int diviser = 0;
int dividend = 4;
try
{
    float result = dividend / diviser;
}
catch (Exception exception)
{
    var info = new { Dividend = dividend, Diviser = diviser };
    test.Error(exception, "division failed", info);
}

Message: division failed
Timestamp: 28/05/2017 17:43:44
Category: Error
Priority: 1
Severity: Error
Title: Application Error Message
App Domain: UnitTestAdapter: Running test
Extended Properties:
Dividend 4
Diviser 0
Exception(JSON) - {
  "ClassName": "System.DivideByZeroException",
  "Message": "Attempted to divide by zero.",
  "Data": null,
  "InnerException": null,
  "HelpURL": null,
  "StackTraceString": "at Test.Logger.LoggerTest.write_to_log_debug() in LoggerTest.cs:line 46",
  "RemoteStackTraceString": null,
  "RemoteStackIndex": 0,
  "ExceptionMethod": "8\nwrite_to_log_debug\nTest.Logger, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null\nTest.Logger.LoggerTest\nVoid write_to_log_debug()",
  "HResult": -2147352558,
  "Source": "LCG.Test.Logger",
  "WatsonBuckets": null
}

Note that the exception is serialized as JSON. I chose this format because I’m already using the Newtonsoft.Json library.

Here is a cool way to log client-side exceptions, including all server and session variables and the information passed by the AJAX call.

public class ErrorController : Controller
{
    [HttpPost]
    public ActionResult Log()
    {
        string forms = string.Empty;
        try { forms = Logging.ObjectToJason(Request.Form); }
        catch { }

        var serverVariables = new Dictionary<string, string>();
        try
        {
            foreach (string serverVariable in Request.ServerVariables)
            {
                serverVariables.Add(serverVariable, Request[serverVariable]);
            }
        }
        catch { }

        var sessionVariables = new Dictionary<string, string>();
        try
        {
            foreach (string sessionVariable in Session.Keys)
            {
                sessionVariables.Add(sessionVariable, Session[sessionVariable].ToString());
            }
        }
        catch { }

        var data = new
        {
            RequestForm = forms,
            ServerVariables = Logging.ObjectToJason(serverVariables),
            SessionVariables = Logging.ObjectToJason(sessionVariables),
        };

        var ex = new Exception("Client Side Error");
        Logging logger = new Logging();
        logger.Error(ex, ex.Message, data);
        return Json(Globals.Ok);
    }
}

And this is the client code:

window.errorHandler = function (xhr, ajaxOptions, thrownError) {
    var html = $(xhr.responseText).text();
    var senddata = {
        responseText: html,
        readyState: xhr.readyState,
        status: xhr.status,
        error: thrownError
    };

    $.ajax({
        url: "Error/Log",
        type: "POST",
        data: senddata,
        success: function (result) {
            console.log('Error logged');
            console.info(result);
        },
        error: function (xhr, ajaxOptions, thrownError) {
            console.log('Error not logged');
            console.log(xhr.responseText);
            console.info(xhr);
        }
    });
}
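As a quick sanity check, the payload the handler posts can be built and inspected in isolation (buildErrorPayload is an illustrative name, not part of the original code; the xhr here is a plain object standing in for a failed jQuery XHR):

```javascript
// Illustrative stand-in for the payload window.errorHandler posts to Error/Log.
function buildErrorPayload(xhr, thrownError) {
    return {
        responseText: xhr.responseText,
        readyState: xhr.readyState,
        status: xhr.status,
        error: thrownError
    };
}
```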

This will log a nice amount of information (I will show some of it):

ServerVariables {
(…)
  "AUTH_TYPE": "Forms",
  "LOGON_USER": "srramalho",
  "CONTENT_TYPE": "application/x-www-form-urlencoded; charset=UTF-8",
  "GATEWAY_INTERFACE": "CGI/1.1",
  "INSTANCE_META_PATH": "/LM/W3SVC/42",
  "PATH_INFO": "/Error/Log",
  "QUERY_STRING": "",
  "REQUEST_METHOD": "POST",
  "SCRIPT_NAME": "/Error/Log",
  "SERVER_NAME": "localhost",
  "SERVER_PORT": "17488",
  "SERVER_PORT_SECURE": "0",
  "SERVER_PROTOCOL": "HTTP/1.1",
  "SERVER_SOFTWARE": "Microsoft-IIS/10.0",
  "URL": "/Error/Log",
  "HTTP_CACHE_CONTROL": "no-cache",
  "HTTP_CONNECTION": "Keep-Alive",
  "HTTP_CONTENT_LENGTH": "24",
  "HTTP_CONTENT_TYPE": "application/x-www-form-urlencoded; charset=UTF-8",
  "HTTP_ACCEPT": "*/*",
  "HTTP_ACCEPT_ENCODING": "gzip, deflate",
(…)
}

And now, the actual logger code:

public interface ILogger
{
    void Debug(string message);
    void Debug(string message, dynamic variables);

    void Info(string message);
    void Info(string message, dynamic variables);

    void Warning(string message);
    void Warning(string message, dynamic variables);
    void Warning(Exception ex, string message);
    void Warning(Exception ex, string message, dynamic variables);

    void Error(Exception ex, string message);
    void Error(Exception ex, string message, dynamic variables);
}

public class Logging : ILogger
{
    LogEntry _entry = new LogEntry();

    private static JsonSerializerSettings CreateSerializerSettings()
    {
        var serializerSettings = new Newtonsoft.Json.JsonSerializerSettings();
        serializerSettings.MaxDepth = 5;
        serializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;
        serializerSettings.MissingMemberHandling = Newtonsoft.Json.MissingMemberHandling.Ignore;
        serializerSettings.NullValueHandling = Newtonsoft.Json.NullValueHandling.Ignore;
        serializerSettings.ObjectCreationHandling = Newtonsoft.Json.ObjectCreationHandling.Reuse;
        serializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Ignore;
        return serializerSettings;
    }

    public static string ObjectToJason(object obj)
    {
        var json = string.Empty;
        if (obj == null)
            return json;
        try
        {
            var serializerSettings = CreateSerializerSettings();
            json = Newtonsoft.Json.JsonConvert.SerializeObject(obj, Newtonsoft.Json.Formatting.Indented, serializerSettings);
        }
        catch (Exception ex)
        {
            json = "Internal error in logger transformation to json:" + ex.Message;
        }
        return json;
    }

    public void Debug(string message)
    {
        #if (DEBUG)
        Debug(message, null);
        #endif
    }

    public void Debug(string message, dynamic variables)
    {
        #if (DEBUG)
        SetEntry(message,
            "Application Debug Message",
            variables,
            null,
            LoggingPriorities.Low,
            TraceEventType.Verbose,
            LoggingCategories.Debug);

        Write(_entry, LoggingCategories.Debug);
        #endif
    }

    public void Info(string message)
    {
        Info(message, null);
    }

    public void Info(string message, dynamic variables)
    {
        SetEntry(message,
            "Application Information Message",
            variables,
            null,
            LoggingPriorities.Low,
            TraceEventType.Information,
            LoggingCategories.Info);

        Write(_entry, LoggingCategories.Info);
    }

    public void Warning(string message)
    {
        Warning(null, message, null);
    }

    public void Warning(string message, dynamic variables)
    {
        Warning(null, message, variables);
    }

    public void Warning(Exception ex, string message)
    {
        Warning(ex, message, null);
    }

    public void Warning(Exception ex, string message, dynamic variables)
    {
        SetEntry(message,
            "Application Warning Message",
            variables,
            ex,
            LoggingPriorities.High,
            TraceEventType.Warning,
            LoggingCategories.Warning);

        Write(_entry, LoggingCategories.Warning);
    }

    public void Error(Exception ex, string message)
    {
        Error(ex, message, null);
    }

    public void Error(Exception ex, string message, dynamic variables)
    {
        SetEntry(message,
            "Application Error Message",
            variables,
            ex,
            LoggingPriorities.Critical,
            TraceEventType.Error,
            LoggingCategories.Error);

        Write(_entry, LoggingCategories.Error);
    }

    //worker section of the code
    private void SetEntry(string message, string title, dynamic variables, Exception ex, LoggingPriorities priority, TraceEventType severity, LoggingCategories category)
    {
        _entry.Priority = (int)priority;
        _entry.Severity = severity;
        _entry.Categories.Add(category.ToString());
        _entry.Title = title;
        _entry.Message = message;

        Dictionary<string, object> extendedProperties = new Dictionary<string, object>();
        if (variables != null)
        {
            foreach (var prop in variables.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public))
            {
                extendedProperties.Add(prop.Name, prop.GetValue(variables, null));
            }
        }
        if (ex != null)
        {
            extendedProperties.Add("Exception(JSON)", ObjectToJason(ex));
        }
        _entry.ExtendedProperties = extendedProperties;
    }

    private void Write(LogEntry entry, LoggingCategories category)
    {
        Logger.Write(
            entry,
            category.ToString(),
            entry.Priority,
            entry.EventId,
            entry.Severity,
            entry.Title,
            entry.ExtendedProperties
        );
    }
}

How can I use this code, you ask? Here is a simple example:

[TestClass]
public class LoggerTest
{
    [Import]
    public LoggerTest()
    {
        DatabaseFactory.SetDatabaseProviderFactory(new DatabaseProviderFactory());
        Microsoft.Practices.EnterpriseLibrary.Logging.Logger.SetLogWriter(new LogWriterFactory().Create());
    }

    [TestMethod]
    public void write_to_log_debug()
    {
        var test = new Logging();
        var variables = new { Time = System.DateTime.Now, Message = "test", ID = 1 };
        var ex = new Exception("test");

        test.Debug("debug test");
        test.Debug("debug test with variables", variables);

        test.Info("Info test");
        test.Info("Info test with variables", variables);

        test.Warning("Warning test");
        test.Warning("Warning test with variables", variables);
        test.Warning(ex, "Warning test with exception");
        test.Warning(ex, "Warning test with exception and vars", variables);

        test.Error(ex, "something went wrong");
        test.Error(ex, "opps", variables);
        //(…)
    }
}

And that’s it. A special thanks to Mr. Cordeiro, a very cool guy, who wrote most of this code and helped the team standardize all the logging.

See you soon and happy coding.

In my last post, “Repository Pattern for EF 6 using detached entities”, I discussed a possible implementation of the repository. Today I will include a small extension made to facilitate the inclusion of related entities (similar to the EF .Include() method, explained here: https://msdn.microsoft.com/en-us/library/gg696785(v=vs.113).aspx).

The general idea is for developers to include, using IntelliSense, all the related classes they might need in their queries (or, in our pattern, in Get() operations). The scope of an include is the context of a repository. Let’s see an example. This is the scenario:

[Diagram: the User entity and its related entities]

This is a pretty self-explanatory example, used to store information about the user, including some custom, unspecified fields.

So, if I fetch the user through our repository, once the repository is disposed no other related entities exist, like in this example:

var usersList = new List<User>();
using (var userRepository = new UserRepository())
{
    usersList = userRepository.Get().ToList();
}
//this will cause an exception, because the context has been disposed and Roles was never loaded
var allroles = usersList.SelectMany(c => c.Roles).ToList();

[Screenshot: the exception thrown when accessing Roles]

In many cases it is useful to load the associated entities and the respective relationships, so the proposed solution should work like this:

var usersList = new List<User>();
using (var userRepository = new UserRepository())
{
    userRepository.Include(c => c.Roles);
    usersList = userRepository.Get().ToList();
}
//this no longer causes an exception, because Roles was loaded eagerly
var allroles = usersList.SelectMany(c => c.Roles).ToList();

Or even some more complex scenarios, like this:

userRepository.Include(c => c.UserInformation.CustomFields);

[Screenshot: the loaded UserInformation and CustomFields entities]

In this case the programmer should expect the system to load all associated entities. The underlying SQL statement generated should be a query with an “INNER JOIN” clause between Users, UserInformations and CustomFields.

For clarity I’ll now describe all the code that makes this work; it includes the IDataRepository, IIdentifiableEntity and IIncludable interfaces. To understand how the first two work, please read this: https://srramalho.wordpress.com/2016/09/27/repository-pattern-for-ef-6-using-detached-entities/.

public interface IIncludable<T>
{
    void Include<TProperty>(Expression<Func<T, TProperty>> path);
}

public interface IIdentifiableEntity
{
}

public interface IIdentifiableEntity<T> : IIdentifiableEntity
{
    T Id { get; set; }
}

public interface IDataRepository : IDisposable
{
}

public interface IDataRepository<T> : IDisposable, IDataRepository where T : class, IIdentifiableEntity, new()
{
    T Add(T entity);
    IEnumerable<T> Add(IEnumerable<T> entities);
    T Remove(T entity);
    IEnumerable<T> Remove(Expression<Func<T, bool>> predicate);
    IEnumerable<T> Remove(IEnumerable<T> entities);
    T Update(T entity);
    IEnumerable<T> Update(IEnumerable<T> entities);
    IQueryable<T> Get();
    IQueryable<T> Get(Expression<Func<T, bool>> predicate);
    T FirstOrDefault(Expression<Func<T, bool>> predicate);
}

And now the base class that implements all the Interfaces.

public abstract class DataRepositoryBase<T, U> :

    IDisposable,

    IDataRepository<T>,

    IIncludable<T>

    where T : class, IIdentifiableEntity, new()

    where U : DbContext, new()

{

    private List<string> _includes;

    private U _context = null;

    protected U Context

    {

        get { return _context; }

    }     

    public DataRepositoryBase()

    {

        _context = new U();

        _context.Configuration.AutoDetectChangesEnabled = false;    

    }

    public T Add(T entity)

    {

        T result = AddEntity(Context, entity);

        Context.SaveChanges();

        SetState(result, EntityState.Detached);

        return result;

    }

    public virtual T AddEntity(U context, T entity)

    {

        SetState(entity, EntityState.Added);

        return entity;

    }

    public IEnumerable<T> Add(IEnumerable<T> entities)

    {

        var results = AddEntities(Context, entities);

        Context.SaveChanges();

        SetState(results, EntityState.Detached);

        return results;

    }

    protected virtual IEnumerable<T> AddEntities(U context, IEnumerable<T> entities)

    {

        SetState(entities, EntityState.Added);

        return entities;

    }

    public IEnumerable<T> Remove(Expression<Func<T, bool>> predicate)

    {

        var results = RemoveEntities(Context, predicate);

        Context.SaveChanges();

        SetState(results, EntityState.Detached);

        return results;

    }

    protected virtual IEnumerable<T> RemoveEntities(U context, Expression<Func<T, bool>> predicate)

    {

        var entities = context.Set<T>().Where(predicate);

        SetState(entities, EntityState.Deleted);

        return entities;

    }

    public T Remove(T entity)

    {

        var result = RemoveEntity(Context, entity);

        Context.SaveChanges();

        SetState(entity, EntityState.Detached);

        return result;

    }

    protected virtual T RemoveEntity(U entityContext, T entity)

    {

        SetState(entity, EntityState.Deleted);

        return entity;

    }

    public IEnumerable<T> Remove(IEnumerable<T> entities)

    {

        var results = RemoveEntities(Context, entities);

        Context.SaveChanges();

        SetState(entities, EntityState.Detached);

        return results;

    }

    protected virtual IEnumerable<T> RemoveEntities(U context, IEnumerable<T> entities)

    {

        SetState(entities, EntityState.Deleted);

        return entities;

    }

    public T Update(T entity)

    {

        var result = UpdateEntity(Context, entity);

        Context.SaveChanges();

        SetState(entity, EntityState.Detached);

        return result;

    }

    protected virtual T UpdateEntity(U context, T entity)

    {

        SetState(entity, EntityState.Modified);

        return entity;

    }

    public IEnumerable<T> Update(IEnumerable<T> entities)

    {

        var results = UpdateEntities(Context, entities);

        Context.SaveChanges();

        SetState(entities, EntityState.Detached);

        return results;

    }

    protected virtual IEnumerable<T> UpdateEntities(U context, IEnumerable<T> entities)

    {

        SetState(entities, EntityState.Modified);           

        return entities;

    }

    public IQueryable<T> Get()

    {

        var results = GetEntities(Context);

        return results;

    }

    protected virtual IQueryable<T> GetEntities(U context)

    {

        var set = context.Set<T>();

        IQueryable<T> query = set.AsNoTracking();

        if (_includes != null)            

            query = AttachIncludes(set, _includes).AsNoTracking();

        return query;

    }

    public IQueryable<T> Get(Expression<Func<T, bool>> predicate)

    {

        var results = GetEntities(Context, predicate);

        return results;

    }

    protected virtual IQueryable<T> GetEntities(U context, Expression<Func<T, bool>> predicate)

    {

        var set = context.Set<T>();

        IQueryable<T> query = set.AsNoTracking();

        if (_includes != null)

            query = AttachIncludes(set, _includes).AsNoTracking();

        return query.Where(predicate);

    }

    public T FirstOrDefault(Expression<Func<T, bool>> predicate)

    {

        return FirstOrDefaultEntity(_context, predicate);

    }

    protected virtual T FirstOrDefaultEntity(U entityContext, Expression<Func<T, bool>> predicate)

    {

        return entityContext.Set<T>().AsNoTracking().FirstOrDefault(predicate);

    }

    protected void SetState(IEnumerable<T> entities, EntityState state)

    {

        foreach (var entity in entities)

        {

            SetState(entity, state);

        }

    }

    protected void SetState(T entity, EntityState state)

    {

        Context.Entry(entity).State = state;

    }

    public void Include<TProperty>(Expression<Func<T, TProperty>> path)

    {

        var member = path.Body as MemberExpression;

        if (member == null)

            throw new ArgumentException("Expression is not a member access", "path");

        if (_includes == null)

            _includes = new List<string>();

        var name = GetMemberExpression(member);

        _includes.Add(name);

    }

    private string GetMemberExpression(MemberExpression member)

    {

        if (member.Expression is MemberExpression)

            return GetMemberExpression(member.Expression as MemberExpression) + "." + member.Member.Name;

        else

            return member.Member.Name;

    }

    private DbQuery<T> AttachIncludes(DbQuery<T> set, List<string> includes)

    {

        foreach (var include in includes)

        {

            set = set.Include(include);

        }

        return set;

    }

    #region IDisposable Members

    ~DataRepositoryBase()

    {

        Dispose(false);

    }

    public void Dispose()

    {

        Dispose(true);

        GC.SuppressFinalize(this);

    }

    protected virtual void Dispose(bool disposing)

    {

        if (disposing)

        {

            // free managed resources

            if (_context != null)

            {

                _context.Dispose();

                _context = null;

            }

        }

    }

    #endregion

}

This is quite a lot of code; let’s take a look at a smaller subset to make sense of it all.

First, let’s start with Get()

    public IQueryable<T> Get()

    {

        var results = GetEntities(Context);

        return results;

    }

Then, let’s look at GetEntities, the overridable method with most of the logic.

    protected virtual IQueryable<T> GetEntities(U context)

    {

        var set = context.Set<T>();

        IQueryable<T> query = set.AsNoTracking();

        if (_includes != null)

            query = AttachIncludes(set, _includes).AsNoTracking();

        return query;

    }

The method detects whether there was a call to Include and then loads the appropriate entities by calling the AttachIncludes method.

    private DbQuery<T> AttachIncludes(DbQuery<T> set, List<string> includes)

    {

        foreach (var include in includes)

        {

            set = set.Include(include);

        }

        return set;

    }

The Include method builds the list of navigation paths consumed by AttachIncludes, using reflection over the expression tree.

    public void Include<TProperty>(Expression<Func<T, TProperty>> path)

    {

        var member = path.Body as MemberExpression;

        if (member == null)

            throw new ArgumentException("Expression is not a member access", "path");

        if (_includes == null)

            _includes = new List<string>();

        var name = GetMemberExpression(member);

        _includes.Add(name);

    }

    private string GetMemberExpression(MemberExpression member)

    {

        if (member.Expression is MemberExpression)

            return GetMemberExpression(member.Expression as MemberExpression) + "." + member.Member.Name;

        else

            return member.Member.Name;

    }

GetMemberExpression recursively concatenates the property names of the path. In our example it would produce something like “UserInformation.CustomFields”.

All subsequent calls on the same repository instance will be affected by this Include, so you must use it with some care.
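To make the mechanics concrete, here is a hypothetical usage sketch (a UserRepository built on the includable DataRepositoryBase above, and a UserInformation navigation property, are assumed to exist):

```csharp
using (var userRepository = new UserRepository())
{
    // each Include call registers a dotted navigation path
    userRepository.Include(c => c.Roles);                        // registers "Roles"
    userRepository.Include(c => c.UserInformation.CustomFields); // registers "UserInformation.CustomFields"

    // AttachIncludes translates each registered path into a
    // DbQuery<T>.Include(string) call before the query executes
    var users = userRepository.Get().ToList();
}
```

Because the include list lives on the repository instance, disposing the repository (or creating a fresh one) resets the registered paths.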

And that’s it for now. See you in my next post, and happy coding.

Lately I’ve been reading about the changes in EF6 and its change tracker mechanism. I’ve written the following repository pattern (to fit the multi-layered software pattern) that, by default, keeps the entities in the detached state (not tracked). This implementation, by design, cannot be used as a Unit of Work. All changes (add, remove and update) are immediately persisted to the database; the programmer can, however, override most methods without changing this behavior.

Let’s start with the interface that defines CRUD operations:

public interface IDataRepository : IDisposable

{

}

public interface IDataRepository<T> : IDisposable, IDataRepository where T : class, IIdentifiableEntity, new()

{                      

    T Add(T entity);

    IEnumerable<T> Add(IEnumerable<T> entities);

    T Remove(T entity);

    IEnumerable<T> Remove(Expression<Func<T, bool>> predicate);

    IEnumerable<T> Remove(IEnumerable<T> entities);

    T Update(T entity);

    IEnumerable<T> Update(IEnumerable<T> entities);

    IQueryable<T> Get();

    IQueryable<T> Get(Expression<Func<T, bool>> predicate);

    T FirstOrDefault(Expression<Func<T, bool>> predicate);

}

These will be our basic operations for each repository that will implement the IDataRepository<T> contract.

Since the “Get” operations return IQueryable, calling these methods will not cause an immediate database hit. For instance:

var results = userRepository.Get();

var user = results.First(c => c.Id == 1); //database hit!

The first line will not trigger a select on the database; only certain instructions like .First(), .ToList(), etc. cause the query to execute. The same rules that apply to a DbSet apply to our repository. You must, however, enumerate the query before the repository goes out of scope. If you don’t, you will get an ObjectDisposedException:

IEnumerable<User> getUser = null;

using (var userRepository = new UserRepository())

{

    getUser = userRepository.Get()

      .Where(c => c.Name.Contains("Filipe"));

}

getUser.FirstOrDefault(); //throws ObjectDisposedException

All the other operations (Remove, Update, Add and FirstOrDefault) will cause an immediate database hit.
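A safe-usage sketch of the pattern above: materialize the query (e.g. with .ToList()) while the repository, and therefore its context, is still alive.

```csharp
List<User> users;
using (var userRepository = new UserRepository())
{
    // ToList() forces the query to execute here, inside the using block,
    // while the underlying DbContext has not yet been disposed
    users = userRepository.Get()
        .Where(c => c.Name.Contains("Filipe"))
        .ToList();
}

// safe: the entities are detached and fully materialized in memory
var first = users.FirstOrDefault();
```

The rule of thumb: an IQueryable returned by the repository is a query definition, not a result; cross the repository boundary only with materialized collections.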

Our model uses two simple entities and a context with a rather uninspired name (Context):

public class User : IIdentifiableEntity<int>

{

    public User()

    {

        Roles = new HashSet<Role>();

    }

    public int Id { get; set; }

    [StringLength(64)]

    public string Name { get; set; }

    public virtual ICollection<Role> Roles { get; set; }

}

public class Role : IIdentifiableEntity<int>

{

    public Role()

    {

        Users = new HashSet<User>();

    }

    public int Id { get; set; }

    [StringLength(64)]

    public string Name { get; set; }

    public virtual ICollection<User> Users { get; set; }

}

public class Context : DbContext

{

    public Context() : base("Connection") { }

    public virtual DbSet<User> UserSet { get; set; }

    public virtual DbSet<Role> RoleSet { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)

    {

        //many-to-many relationship

        modelBuilder.Entity<Role>()

            .HasMany(e => e.Users)

            .WithMany(e => e.Roles)

            .Map(m => m.ToTable("UserRoles")

            .MapLeftKey("RoleId")

            .MapRightKey("UserId"));

    }

}

This model generates the following tables in SQL Server.

[image: generated tables in SQL Server]

The IIdentifiableEntity interface is there so you can identify each database entity straight away. It’s another little trick that helps separate entities from other POCOs (each database entity must implement this interface).

public interface IIdentifiableEntity

{

}

public interface IIdentifiableEntity<T>: IIdentifiableEntity

{

    T Id { get; set; }

}

And now, for the interesting part. The repository base class itself:

public abstract class DataRepositoryBase<T, U> :

    IDisposable,

    IDataRepository<T>

    where T : class, IIdentifiableEntity, new()

    where U : DbContext, new()

{

 

    private U _context = null;

    protected U Context

    {

        get { return _context; }

    }

 

    public DataRepositoryBase()

    {

        _context = new U();

        _context.Configuration.AutoDetectChangesEnabled = false;    

    }

 

    public T Add(T entity)

    {

        T result = AddEntity(Context, entity);

        Context.SaveChanges();

        SetState(result, EntityState.Detached);

        return result;

    }

 

    protected virtual T AddEntity(U context, T entity)

    {

        SetState(entity, EntityState.Added);

        return entity;

    }

 

    public IEnumerable<T> Add(IEnumerable<T> entities)

    {

        var results = AddEntities(Context, entities);

        Context.SaveChanges();

        SetState(results, EntityState.Detached);

        return results;

    }

 

    protected virtual IEnumerable<T> AddEntities(U context, IEnumerable<T> entities)

    {

        SetState(entities, EntityState.Added);

        return entities;

    }

 

    public IEnumerable<T> Remove(Expression<Func<T, bool>> predicate)

    {

        var results = RemoveEntities(Context, predicate);

        Context.SaveChanges();

        SetState(results, EntityState.Detached);

        return results;

    }

 

    protected virtual IEnumerable<T> RemoveEntities(U context, Expression<Func<T, bool>> predicate)

    {

        var entities = context.Set<T>().Where(predicate);

        SetState(entities, EntityState.Deleted);

        return entities;

    }

 

    public T Remove(T entity)

    {

        var result = RemoveEntity(Context, entity);

        Context.SaveChanges();

        SetState(entity, EntityState.Detached);

        return result;

    }

 

    protected virtual T RemoveEntity(U entityContext, T entity)

    {

        SetState(entity, EntityState.Deleted);

        return entity;

    }

 

    public IEnumerable<T> Remove(IEnumerable<T> entities)

    {

        var results = RemoveEntities(Context, entities);

        Context.SaveChanges();

        SetState(entities, EntityState.Detached);

        return results;

    }

 

    protected virtual IEnumerable<T> RemoveEntities(U context, IEnumerable<T> entities)

    {

        SetState(entities, EntityState.Deleted);

        return entities;

    }

 

    public T Update(T entity)

    {

        var result = UpdateEntity(Context, entity);

        Context.SaveChanges();

        SetState(entity, EntityState.Detached);

        return result;

    }

 

    protected virtual T UpdateEntity(U context, T entity)

    {

        SetState(entity, EntityState.Modified);

        return entity;

    }

 

    public IEnumerable<T> Update(IEnumerable<T> entities)

    {

        var results = UpdateEntities(Context, entities);

        Context.SaveChanges();

        SetState(entities, EntityState.Detached);

        return results;

    }

 

    protected virtual IEnumerable<T> UpdateEntities(U context, IEnumerable<T> entities)

    {

        SetState(entities, EntityState.Modified);           

        return entities;

    }

 

    public IQueryable<T> Get()

    {

        var results = GetEntities(Context);

        return results;

    }

 

    protected virtual IQueryable<T> GetEntities(U context)

    {

        return context.Set<T>().AsNoTracking();

    }

 

    public IQueryable<T> Get(Expression<Func<T, bool>> predicate)

    {

        var results = GetEntities(Context, predicate);

        return results;

    }

 

    protected virtual IQueryable<T> GetEntities(U entityContext, Expression<Func<T, bool>> predicate)

    {

        return entityContext.Set<T>().AsNoTracking().Where(predicate);

    }

 

    public T FirstOrDefault(Expression<Func<T, bool>> predicate)

    {

        return FirstOrDefaultEntity(_context, predicate);

    }

 

    protected virtual T FirstOrDefaultEntity(U entityContext, Expression<Func<T, bool>> predicate)

    {

        return entityContext.Set<T>().AsNoTracking().FirstOrDefault(predicate);

    }

 

    protected void SetState(IEnumerable<T> entities, EntityState state)

    {

        foreach (var entity in entities)

        {

            SetState(entity, state);

        }

    }

 

    protected void SetState(T entity, EntityState state)

    {

        Context.Entry(entity).State = state;

    }

 

    #region IDisposable Members

    ~DataRepositoryBase()

    {

        Dispose(false);

    }

 

    public void Dispose()

    {

        Dispose(true);

        GC.SuppressFinalize(this);

    }

 

    protected virtual void Dispose(bool disposing)

    {

        if (disposing)

        {

            // free managed resources

            if (_context != null)

            {

                _context.Dispose();

                _context = null;

            }

        }

    }

    #endregion

}

The idea is that each entity should implement its own repository, and they can be simple or complex. Let’s start with the simplest form of repository, an empty one.

public abstract class RepositoryBase<T> : DataRepositoryBase<T, Context>

    where T : class, IIdentifiableEntity, new()

{

}

public interface IUserRepository : IDataRepository<User>

{

}

public class UserRepository : RepositoryBase<User>, IUserRepository

{

}

The UserRepository is now ready for use and it has the basic operations, inherited from the base class DataRepositoryBase.

If you wanted to use it, you could do it like so:

using (var userRepository = new UserRepository())

{            

   var user1 = new User { Name = "Filipe" };

   userRepository.Add(user1);

   var test = userRepository.Get()

         .Where(c => c.Name.Contains("Filipe"))

         .ToList();

}

What if we want a more complex repository? Well, you can implement your own repository with all the custom methods you like.

public interface IRoleRepository

{

    List<User> UsersInRole(Role role);

    List<Role> RolesForUser(User user);

    void AddUserToRole(User user, Role role);

}

public class RoleRepository: RepositoryBase<Role>, IRoleRepository

{

    protected override IEnumerable<Role> AddEntities(Context context, IEnumerable<Role> entities)

    {

        //you can provide a completely new implementation for Add, but you don’t have to call SaveChanges

        return base.AddEntities(context, entities);

    }

    public List<User> UsersInRole(Role role)

    {

        var users = Context.RoleSet

            .AsNoTracking()

            .First(c => c.Id == role.Id)

            .Users;

        return users.ToList();

    }

    public List<Role> RolesForUser(User user)

    {

        var roles = Context.UserSet

            .AsNoTracking()

            .First(c => c.Id == user.Id)

            .Roles;

        return roles.ToList();

    }

    public void AddUserToRole(User user, Role role)

    {

        var ctxUser = Context.UserSet.Find(user.Id);

        var ctxRole = Context.RoleSet.Find(role.Id);

        ctxUser.Roles.Add(ctxRole);

        Context.ChangeTracker.DetectChanges();

        Context.SaveChanges();           

    }

}

So the idea is to keep all entities, by default, detached from the context, and to allow method overriding without compromising the behavior of the repository (changes are saved after method invocation).

One of the interesting things that happens with this implementation concerns many-to-many relations. There isn’t an explicit entity mapping the User-Role relationship (UserRoles in this case), so how do we add a new relation? The simplest way I could find is to call .DetectChanges(). This method will detect that a new item was added to an associated collection and change its state; then you just call .SaveChanges(). Remember, you should expect all entities to be disconnected, so for this to work you must use entities that are attached to the context (ctxUser and ctxRole).
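A hypothetical usage sketch (the Ids are placeholders): the caller can pass detached entities, because the repository attaches context-bound copies via Find before mutating the collection.

```csharp
using (var roleRepository = new RoleRepository())
{
    // these instances are detached; only their Ids matter here
    var user = new User { Id = 1 };
    var role = new Role { Id = 2 };

    // inside: Find attaches tracked copies, the role is added to the
    // user's collection, then DetectChanges + SaveChanges persist the link
    roleRepository.AddUserToRole(user, role);
}
```

The explicit DetectChanges call is needed because the base class disables AutoDetectChangesEnabled in its constructor.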

And that’s it… Another simple recipe to simplify your code.

Thanks.

In today’s post I’ll show how we use views with EF6 (some of the code was written by my friend Nuno Moura, thank you). This apparently simple post came out of a recurrent debate with my colleagues about the pros and cons of lazy loading versus eager loading. To tune performance we often must decide, case by case, which is better (lazy seems to be the default for the multi-layer applications I see being developed lately). To compare the two options one must analyze the generated queries, the number of hits to the database, and the actual code (and LINQ expressions); in short, a very costly process.

I propose a small step back and say you should use, when necessary: views in EF (… he said whaaaat?)

Let’s imagine the following example (based on the very old aspnet_membership schema) with lazy loading active (the default EF setting):

[image: aspnet_membership model diagram]

using (SecurityModel context = new SecurityModel())

{

    var users = context.aspnet_Users;

    foreach (var user in users)

    {

        foreach (var role in user.aspnet_Roles)

        {

            Console.WriteLine("user {0} has {1} role", user.UserName, role.RoleName);

        }                   

    }

}

Console.ReadKey();

By default, each iteration over users triggers a separate query to fetch that user’s role(s): the classic N+1 problem.

I’m using Express Profiler (v2.1) to prove just that.

[image: Express Profiler trace showing one query per user]

The first highlighted line selects all users (triggered by the first foreach), while the other highlighted line shows one of the per-user selects that read the role(s) (triggered by the second foreach). This is obviously a very inefficient way to solve the problem; you can preload all the information instead, like so:

using (SecurityModel context = new SecurityModel())

{

    var users = context.aspnet_Users.Include("aspnet_Roles");

    foreach (var user in users)

    {                   

        foreach (var role in user.aspnet_Roles)

        {

            Console.WriteLine("user {0} has {1} role", user.UserName, role.RoleName);

        }

    }

}

Console.ReadKey();

The first call will trigger the return of all the information.

[image: Express Profiler trace showing a single query]

But is this enough? What about scenarios with partially loaded information, where you might need more complex logic to load your data, or where you load many related tables? For argument’s sake, let’s imagine you don’t need to load records from “Roles” that are inactive (Active = 0).

You could do something like this:

using (SecurityModel context = new SecurityModel())

{

    var users = context.aspnet_Users.SelectMany(p => p.aspnet_Roles.Where(c=>c.Active==true),

            (parent, child) => new { parent.UserName, child.RoleName });               

    foreach (var user in users)

    {

        Console.WriteLine("user {0} has {1} role", user.UserName, user.RoleName);

    }

}

 

[image: Express Profiler trace]

This solves the problem, but the complexity of your LINQ instructions becomes an issue with many related objects, and with this solution you don’t benefit from having an entity that represents your result. You also don’t control the generated SQL query (which, in this example, is good but not great).

The best way to overcome all these difficulties, in my opinion, is to push this complexity back into SQL views. The advantages are better performance and cleaner, more uniform data and service layers, without shady hidden tricks to optimize some fetches or messy eager-loading code (you also eliminate problems with developers who don’t quite know all the details of LINQ).

To exemplify this technique I will create a new entity that has only the name and role of a user and only returns if the role is active. Something like this, if it were written in SQL:

IF EXISTS (

      SELECT * FROM sys.views

      WHERE object_id = OBJECT_ID(N'VW_UsersInRoles')

)

      DROP VIEW VW_UsersInRoles

GO

CREATE VIEW VW_UsersInRoles AS

SELECT    

   dbo.aspnet_Roles.RoleName

  ,dbo.aspnet_Users.UserName

FROM        

      dbo.aspnet_Roles

      INNER JOIN dbo.aspnet_UsersInRoles ON dbo.aspnet_Roles.RoleId = dbo.aspnet_UsersInRoles.RoleId

      INNER JOIN dbo.aspnet_Users ON dbo.aspnet_UsersInRoles.UserId = dbo.aspnet_Users.UserId

WHERE

      dbo.aspnet_Roles.Active = 1

If I can just call this view and map it to a corresponding entity, I get extremely efficient code, fetching exactly what is necessary in a single database hit. This, of course, is a very simple example, but you can imagine much more complex ones.

First, let’s create the view inside the EF seeding:

public class SecurityModelInitializer : DropCreateDatabaseIfModelChanges<SecurityModel>

{

    protected override void Seed(SecurityModel context)

    {

        var dropView = @"

IF  EXISTS (

       SELECT * FROM sys.views

       WHERE object_id = OBJECT_ID(N'VW_UsersInRoles')

)

DROP VIEW VW_UsersInRoles";

 

        var createView = @"

CREATE VIEW VW_UsersInRoles AS

SELECT    

    dbo.aspnet_Roles.RoleName

   ,dbo.aspnet_Users.UserName

FROM        

    dbo.aspnet_Roles

    INNER JOIN dbo.aspnet_UsersInRoles ON dbo.aspnet_Roles.RoleId = dbo.aspnet_UsersInRoles.RoleId

    INNER JOIN dbo.aspnet_Users ON dbo.aspnet_UsersInRoles.UserId = dbo.aspnet_Users.UserId

WHERE

    dbo.aspnet_Roles.Active = 1";

 

        context.Database.ExecuteSqlCommand(dropView);

        context.Database.ExecuteSqlCommand(createView);

    }

}

public partial class SecurityModel : DbContext

{

    public SecurityModel()

        : base("name=SecurityModel")

    {

        Database.SetInitializer<SecurityModel>(new SecurityModelInitializer());

    }

    ///(…)

}

This piece of code is specific to T-SQL, but you can adapt it to any other DBMS with minimal tweaking.

Now the corresponding entity class:

[Table("VW_UsersInRoles")]

public class VW_UsersInRoles

{

    [StringLength(256)]

    public string RoleName { get; set; }

 

    [StringLength(256)]

    public string UserName { get; set; }

}

Next, fetch the result set by executing the view.

using (SecurityModel context = new SecurityModel())

{

    var users = context.Database.SqlQuery<VW_UsersInRoles>("SELECT * FROM VW_UsersInRoles");

    foreach (var user in users)

    {

        Console.WriteLine("user {0} has {1} role", user.UserName, user.RoleName);

    }

}

Console.ReadKey();

[image: console output]

And that’s it… In one fell swoop you can do practically everything in the most streamlined way possible. I know this requires a bit more effort than usual (keeping another set of entities that represent the views and seeding them to the database), but the end result is as performant as it gets. Once again, this is a very simple example, but when you’re working with big business entities with lots of relations this will have a very significant impact on performance.

Personally, the way I prefer to implement this is to encapsulate everything inside the repository pattern and have views derive from a repository that only allows “Read” operations (not shown, for brevity). I also recommend using reflection to generate the views’ corresponding SQL (I’ll demonstrate the select; generating the create statement in the seeding is also possible). Let’s take a look at this instruction:

var users = context.Database.SqlQuery<VW_UsersInRoles>(“SELECT * FROM VW_UsersInRoles”);

That looks awful, and it can be improved by adding an extension method to the Database class.

public static class Extensions

{

    public static DbRawSqlQuery<TElement> GetView<TElement>(this Database element)

    {

        return element.SqlQuery<TElement>(GetView<TElement>());

    }

 

    public static string GetView<TElement>()

    {

        var type = typeof(TElement);

        var fieldList = type

                .GetProperties()

                .Select(c => c.Name);

        var table = type.Name;

        var fields = string.Join(",", fieldList);

        return "SELECT " + fields + " FROM " + table;

    }

}

And so, we end up with:

using (SecurityModel context = new SecurityModel())

{

    var users = context.Database.GetView<VW_UsersInRoles>();

    foreach (var user in users)

    {

        Console.WriteLine("user {0} has {1} role", user.UserName, user.RoleName);

    }

}

Console.ReadKey();

You could do a couple more fancy tricks with the GetView extension method, but this is supposed to be a simple entry and I don’t want to wander too far off topic.

Last notes: the name of the table is often set by the context setup in code first, so you should improve the line (var table = type.Name;) and resolve the actual mapping of the view (the [Table(“VW_UsersInRoles”)] attribute).
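A minimal sketch of that improvement: resolve the mapped name from the [Table] attribute (System.ComponentModel.DataAnnotations.Schema), falling back to the CLR type name when the attribute is absent.

```csharp
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;

public static class ViewNameResolver
{
    public static string GetViewName<TElement>()
    {
        var type = typeof(TElement);

        // honor the mapping declared via [Table("VW_UsersInRoles")]
        var tableAttribute = type.GetCustomAttributes(typeof(TableAttribute), false)
            .Cast<TableAttribute>()
            .FirstOrDefault();

        // fall back to the type name if no explicit mapping exists
        return tableAttribute != null ? tableAttribute.Name : type.Name;
    }
}
```

With this helper, GetView can build its SELECT statement from the real mapped name instead of assuming the class and view names coincide.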

 

After you have transferred some of the fetching logic back into the database you can now delegate the performance optimization to the dba. This is where you will make the real impact in your application. To improve VIEW PERFORMANCE I recommend reading a really cool article by Jes Borland : https://www.simple-talk.com/sql/learn-sql-server/sql-server-indexed-views-the-basics/

Like I said in the beginning, some of the code was written by Nuno Moura, thank you once again.

A while back I posted a way to intercept method calls based on (the now deprecated) Castle Dynamic Proxy, using a simple trick involving interfaces (here: https://srramalho.wordpress.com/2014/10/01/c-intercept-method-calls/). This article uses the new Castle.Core (3.3.3) (NuGet: Install-Package Castle.Core).

I personally use these “interceptors” for logging (or auditing) special methods. The idea is to grab the arguments of a method without changing the code too much.

There are some limitations to what you can do with this but it’s pretty useful for simple scenarios. The main code, looks like this:

using Castle.DynamicProxy;

namespace Blog.Interceptor

{

    public class Proxy<TInterceptedClass> where TInterceptedClass : class

    {

        public static TInterceptedClass CreateObject(IInterceptor interceptor)

        {

            return new ProxyGenerator()
                .CreateClassProxy<TInterceptedClass>(interceptor);

        }

    }

}

To intercept a class you will have to instantiate objects using the Proxy class.

class Program

{

    static void Main(string[] args)

    {

        var i = new Interceptor();

        var cp = Proxy<InterceptedClass>.CreateObject(i);

        cp.Method1(20, 40f);

        Console.ReadLine();

    }

}

In this example I want to capture the values “20” and “40f”. The proxy creates the object while the interceptor will process the request. The following code explains the rest.

public class Interceptor : IInterceptor

{

    public void Intercept(IInvocation invocation)

    {

        Console.WriteLine("Preprocess");

        Console.WriteLine("Method {0}", invocation.Method.Name);

        foreach (var item in invocation.Arguments)

        {

            Console.WriteLine("type:{0}, value:{1}",
                item.GetType().ToString(), item.ToString());

        }

        Console.WriteLine("Call method");

        invocation.Proceed();

        Console.WriteLine("Post process");

    }

}

public class InterceptedClass

{

    public virtual void Method1(int x, float z)

    {

        Console.WriteLine("Called Method 1");

    }

}

Note that, for this to work, the methods to be intercepted must be marked as virtual.

And that’s pretty much it. Check out my other article: https://srramalho.wordpress.com/2014/10/01/c-intercept-method-calls/

In today’s post I’ll write a recipe for logging (or tracing) in .Net. There are many ways to go about this, just google “.net logging” (or check out http://www.dotnetlogging.com/). There are just too many options out there; some are simple while others are more complicated. After doing some research I started to look a bit closer at the very respectable Enterprise Library. I’m not going to argue why this is better or worse than all the others; it just does all the things I need it to do (and much more) and it’s very well documented.

Setup

First, the setup. Unfortunately there is no Visual Studio extension for current versions (only 2012 and lower) available through the usual channels (no idea why). Fortunately you can download the extensions for VS2013 and VS2015. Download and install; this extension helps us edit the configuration files.


Use NuGet to add Enterprise Library – Logging Application Block. The Common Infrastructure should be installed as a dependent library. As always you can use the Console with: Install-Package EnterpriseLibrary.Logging

Architecture

To understand this framework, you must understand its architecture.


The next piece of text is from MSDN and explains, better than I could, what each stage is. So here it is taken from https://msdn.microsoft.com/en-us/library/dn440731(v=pandp.60).aspx, and I quote:


Creating the Log Entry

The user creates a LogWriter instance, and invokes one of the Write method overloads. Alternatively, the user can create a new LogEntry explicitly, populate it with the required information, and use a LogWriter to pass it to the Logging block for processing.

Filtering the Log Entry

The Logging block filters the LogEntry (based on your configuration settings) for message priority, or categories you added to the LogEntry when you created it. It also checks to see if logging is enabled. These filters can prevent any further processing of the log entries. This is useful, for example, when you want to allow administrators to enable and disable additional debug information logging without requiring them to restart the application.

Selecting Trace Sources

Trace sources act as the link between the log entries and the log targets. There is a trace source for each category you define in the logging block configuration; plus, there are three built-in trace sources that capture all log entries, unprocessed entries that do not match any category, and entries that cannot be processed due to an error while logging (such as an error while writing to the target log).

Selecting Trace Listeners

Each trace source has one or more trace listeners defined. These listeners are responsible for taking the log entry, passing it through a separate log formatter that translates the content into a suitable format, and passing it to the target log. Several trace listeners are provided with the block, and you can create your own if required.

Formatting the Log Entry

Each trace listener can use a log formatter to format the information contained in the log entry. The block contains log message formatters, and you can create your own formatter if required. The text formatter uses a template containing placeholders that makes it easy to generate the required format for log entries.

(End quote, retrieved on December 2015)

This all seems quite complicated, but it will become clearer with some simple examples. Create a Console application and add a configuration file (app.config). Right-clicking the file will, if you installed the extension correctly, make the “Edit Configuration File” option visible.


You should now be seeing the configuration editor. If you’ve used Enterprise Library without this handy tool you will see how much less work is involved in configuring it all. In the “Blocks” menu, choose “Add Logging Settings”.


I’ve added a “Flat File Trace Listener” and then configured “General” and “Logging Errors & Warnings” to use the “Flat File Trace Listener”.


I’ve also configured the “Flat File Trace Listener” to log to a file called trace.log.
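If you’re curious what the designer actually writes, the generated app.config contains a loggingConfiguration section roughly like the following. This is a paraphrased sketch from memory, not the tool’s exact output; attribute details may differ between Enterprise Library versions.

```xml
<loggingConfiguration name="" tracingEnabled="true" defaultCategory="General">
  <listeners>
    <add name="Flat File Trace Listener"
         type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
         listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging"
         fileName="trace.log" />
  </listeners>
  <categorySources>
    <add switchValue="All" name="General">
      <listeners>
        <add name="Flat File Trace Listener" />
      </listeners>
    </add>
  </categorySources>
  <specialSources>
    <!-- other special sources omitted -->
    <errors switchValue="All" name="Logging Errors &amp; Warnings">
      <listeners>
        <add name="Flat File Trace Listener" />
      </listeners>
    </errors>
  </specialSources>
</loggingConfiguration>
```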

static void Main(string[] args)
{
    Logger.SetLogWriter(new LogWriterFactory().Create());
    LogEntry entry = new LogEntry();
    entry.Message = "I am logging";
    Logger.Write(entry);
}

And there you go, you’re now logging using the LogEntry object. A quick note about the first line: it sets up the logger so that the static Logger.Write method can be used. MSDN uses Unity to handle the dependency injection part of this example; I don’t want to go there, just to keep things simple.


Obviously Write has many overloads and LogEntry has many properties.


Imagine that, using the described configuration, you decide to log a message under the “Debug” category. Such a category does not exist in your configuration.

static void Example2()
{
    Logger.Write("Message", "Debug", 1, 3000, TraceEventType.Information, "title");
}


The logger will still log the information, but since there is no “Debug” category it ends up in the log file wrapped, by default, in another error log entry message; this entry was captured by the “Logging Errors & Warnings” special category. You could also have configured the “Unprocessed Category”, although I don’t recommend it because it seems unorganized to me.

So you can go ahead and create the “Debug” category and rerun the same code.


Filters are used to determine which messages end up in the logger. Imagine the scenario where you only wish to log entries above a particular priority (higher than 10, for instance).

Add two filters, “Priority Filter” and the “Logging Enable Filter”.


These filters are basically useful to prevent collecting too much information in production environments. The “Logging Enable Filter” will activate or deactivate all logging (it’s an all-or-nothing kind of thing).

static void Example3()
{
    Logger.Write("Message", "Debug", 20); //priority 20, logged
    Logger.Write("Message", "Debug", 5); //priority 5, ignored
}


This is the basic concept; we can also filter based on category, priority, or a custom filter.
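As a mental model (not Enterprise Library’s actual API; the class name here is mine), a priority filter boils down to a simple predicate applied before anything is written to the listeners:

```csharp
// Library-free sketch of the decision a priority filter makes: only entries at
// or above the configured minimum priority get through to the trace listeners.
// Illustrative only; Enterprise Library's real filters implement its own
// filter interfaces and are driven by configuration.
public class MinimumPriorityFilter
{
    private readonly int _minimumPriority;

    public MinimumPriorityFilter(int minimumPriority)
    {
        _minimumPriority = minimumPriority;
    }

    public bool ShouldLog(int entryPriority)
    {
        return entryPriority >= _minimumPriority;
    }
}
```

With a minimum of 10, an entry of priority 20 passes and one of priority 5 does not, which is exactly what Example3 below demonstrates.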

There is a very interesting feature called the “Rolling Flat File Trace Listener”, which prevents your system from creating too many log files. In this example I’ve configured it to roll to a new file every minute and to keep only 2 files in the system. So every minute it will switch from one file to the other and overwrite it.


You can also do this based on size.

If you need to log to the database you must add another NuGet package called “Enterprise Library – Logging Application Block Database Provider”.


If you look in the “Packages” folder you will find the scripts that create the database and the necessary structures.


Then configure the “Database Settings” with the correct database connection string.


And set up your logger like so:

DatabaseFactory.SetDatabaseProviderFactory(new DatabaseProviderFactory());


So now we have a couple of options to filter, categorize and store the information.


I’m going to skip formatters because they are pretty self-explanatory. The next point to address is what information to store.

static void Example5()
{
    var logEntry = new LogEntry()
    {
        Message = "A lot more info to log",
        Categories = new List<string>() { "Debug" },
    };
    var extendedInfo = new Dictionary<string, object>();

    var debug = new DebugInformationProvider();
    debug.PopulateDictionary(extendedInfo);

    var managedInfo = new ManagedSecurityContextInformationProvider();
    managedInfo.PopulateDictionary(extendedInfo);
    extendedInfo.Add("Some more information", "other information");
    logEntry.ExtendedProperties = extendedInfo;
    Logger.Write(logEntry);
}

In this little example we are stacking up a lot of information; the debug variable is probably the most important because the stack trace is in there.


Lately I’ve been busy working on some projects based on SOA, using WCF as the “glue” between the services and service consumers. The architecture was inspired by Miguel Castro’s, specifically the one he proposes on Pluralsight, with my own tweaks. One of those tweaks is the need to preemptively load related information into my business entities, mimicking the eager loading Entity Framework offers through navigation properties. This post is about implementing an eager loading mechanism into the service.

Imagine the following scenario:


In this EF model every Folder has, for instance, a list of owned Emails and a User. The auto-generated code of this entity looks like this.

public partial class Folder : IEagerLoaderObject
{
    public Folder()
    {
        this.Emails = new HashSet<Email>();
    }
    public int Id { get; set; }
    //other properties omitted for clarity
    public User User { get; set; }
    public ICollection<Email> Emails { get; set; }
}

Manipulating the T4 template, I’ve added the following interface to the generated definitions. It’s an empty marker interface:

public interface IEagerLoaderObject { }

public partial class User : IEagerLoaderObject

The navigation properties are not marked as virtual, so lazy loading is not expected to occur (I’ve turned lazy loading explicitly off). Executing the code:

var x = context.Folders.FirstOrDefault(u => u.Id == 1);           

will not fill “x” with the navigation properties (User and Emails), which is exactly what we expect.


Getting this information would imply doing this:

var y = context.Folders.Include("User").FirstOrDefault(u => u.Id == 1);


So how can we build a service that mimics the eager loading behavior present in EF?

Before I answer this question, let me show what I would like the service to look like.

[ServiceContract]
public interface IFolderService
{
    [OperationContract]
    Folder GetFolder(int id);
    [OperationContract]
    Folder GetFolderWithDependencies(int id, InformationCallChain eagerLoader);
}

While GetFolder returns the folder with no loaded information, GetFolderWithDependencies should let me decide what to load (user, emails or both).

//This is a service reference I've added; it references the WCF service FolderService
FolderServiceClient client = new FolderServiceClient();

EagerLoader<Folder> eagerLoader = new EagerLoader<Folder>();
//loads the user information (entity)
eagerLoader.Reference(c => c.User);
//loads all emails (related collection)
eagerLoader.Collections(c => c.Emails);

client.GetFolderWithDependencies(1, eagerLoader.GetCallChain());

The eager loader is divided into two components: the factory and the call chain. The factory builds call chains based on type. It also uses the concepts of reference and collection. A reference is an entity, like User, while a collection is a list of related objects, like Emails. You can chain multiple navigation properties, using IntelliSense, like so:

// loads User and all emails for that user

eagerLoader.Collections(c => c.User.Emails);
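The heavy lifting in Reference/Collections is pulling the member path out of the lambda’s expression tree. A self-contained sketch of just that step (ExpressionHelper and the trimmed Folder/User classes here are illustrative, not part of the original code):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public class User { public string Name { get; set; } }
public class Folder { public User User { get; set; } }

public static class ExpressionHelper
{
    // f => f.User.Name prints as "f.User.Name"; dropping the lambda parameter
    // leaves the dotted member path "User.Name", ready for an Include() call.
    public static string MemberPath<T, TProp>(Expression<Func<T, TProp>> expr)
    {
        string body = ((MemberExpression)expr.Body).ToString();
        return string.Join(".", body.Split('.').Skip(1));
    }
}
```

For example, `ExpressionHelper.MemberPath<Folder, string>(f => f.User.Name)` yields "User.Name".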

The call chain represents a number of sequential includes. Now let’s look at the service itself.

public class FolderService : IFolderService
{
    public Folder GetFolder(int id)
    {
        UsersContext context = new UsersContext();
        return context.Folders.FirstOrDefault(u => u.Id == id);
    }
    //this is a simplified example of the information call
    public Folder GetFolderWithDependencies(int id, InformationCallChain eagerLoader)
    {
        UsersContext context = new UsersContext();
        var folder = context.Folders.Where(u => u.Id == id);
        var query = ProcessIncludeChain<Folder>(folder, eagerLoader.GetCallMembers());
        return query.First();
    }
    //this is the entry point
    public static IQueryable<T> ProcessIncludeChain<T>(IQueryable<T> set, IEnumerable<string> list)
    {
        return ProcessIncludeChain<T>(set, list, 0);
    }
    //this part transforms the call chain into actual includes
    private static IQueryable<T> ProcessIncludeChain<T>(IQueryable<T> set, IEnumerable<string> list, int idx)
    {
        string inc = list.ElementAt(idx);
        IQueryable<T> ret = set.Include(inc);

        if (idx < list.Count() - 1)
        {
            ret = ProcessIncludeChain<T>(ret, list, idx + 1);
        }
        return ret;
    }
}

For this example ProcessIncludeChain is in the service for clarity, but in actual production scenarios I keep it in my repository layer.
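As an aside, the recursion in ProcessIncludeChain is just a left fold over the include list. The equivalence can be shown with plain strings standing in for the query, since EF’s Include isn’t available in a standalone snippet (IncludeChain/ApplyIncludes are names I made up for this sketch):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class IncludeChain
{
    // Appends each include in order, exactly like the recursive version does
    // with real IQueryable.Include calls.
    public static string ApplyIncludes(string query, IEnumerable<string> includes)
    {
        return includes.Aggregate(query,
            (q, inc) => q + ".Include(\"" + inc + "\")");
    }
}
```

`ApplyIncludes("context.Folders", new[] { "User", "Emails" })` produces the string `context.Folders.Include("User").Include("Emails")`, mirroring the chain of Include calls the service builds.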

The final piece of the puzzle is the call chain and call chain factory. It’s quite a big piece of code, but here it is:

[DataContract]
public class EagerLoader<TEntity> where TEntity : IEagerLoaderObject
{
    InformationCallChain _callChain;
    public EagerLoader()
    {
        _callChain = new InformationCallChain();
    }
    public EagerLoader(InformationCallChain callChain)
    {
        this._callChain = callChain;
    }
    public EagerLoader<TProperty> Reference<TProperty>(Expression<Func<TEntity, TProperty>> navigationProperty)
        where TProperty : IEagerLoaderObject
    {
        var name = ((MemberExpression)navigationProperty.Body).ToString();
        var type = InformationCallChain.CallType.Reference;
        //Split/RemoveAt/Join rely on custom extension methods (not shown) that
        //strip the lambda parameter and rejoin the member path
        string endName = name.Split('.').RemoveAt(0).Join(".");
        _callChain.AppendMember(endName, type);
        return new EagerLoader<TProperty>((InformationCallChain)_callChain.Copy(_callChain.Idx - 1));
    }
    public InformationCallChain GetCallChain()
    {
        return _callChain;
    }
    public EagerLoader<TProperty> Collections<TProperty>(Expression<Func<TEntity, IEnumerable<TProperty>>> navigationProperty)
        where TProperty : IEagerLoaderObject
    {
        var name = ((MemberExpression)navigationProperty.Body).ToString();
        var type = InformationCallChain.CallType.Collection;
        string endName = name.Split('.').RemoveAt(0).Join(".");
        _callChain.AppendMember(endName, type);
        return new EagerLoader<TProperty>((InformationCallChain)_callChain.Copy(_callChain.Idx - 1));
    }
}

[DataContract]
public class InformationCallChain : ICloneable
{
    public enum CallType
    {
        Reference,
        Collection,
    }
    [DataContract]
    public class CallInfo
    {
        [DataMember]
        string _member;
        [DataMember]
        CallType _callType;
        public CallType CallType
        {
            get { return _callType; }
        }
        public string Member
        {
            get { return _member; }
        }
        public CallInfo(string member, CallType callType)
        {
            this._callType = callType;
            this._member = member;
        }
    }
    [DataContract]
    class InformationCall
    {
        public CallType CallType;
        public string MemberName = string.Empty;
        public InformationCall(CallType callType)
        {
            this.CallType = callType;
        }
        public InformationCall(string memberName, CallType callType)
        {
            this.CallType = callType;
            this.MemberName = memberName;
        }
    }
    [DataMember]
    Dictionary<int, List<CallInfo>> _dicLoading;
    [DataMember]
    int _idx;
    public int Idx
    {
        get { return _idx; }
    }
    public InformationCallChain()
    {
        _dicLoading = new Dictionary<int, List<CallInfo>>();
    }
    public void AppendMember(string memberName, CallType callType)
    {
        if (_dicLoading.ContainsKey(this._idx))
            _dicLoading[this._idx].Add(new CallInfo(memberName, callType));
        else
        {
            var lst = new List<CallInfo>();
            lst.Add(new CallInfo(memberName, callType));
            _dicLoading.Add(this._idx, lst);
        }
        this._idx++;
    }
    public InformationCallChain Copy(int idx)
    {
        InformationCallChain ret = (InformationCallChain)Clone();
        ret._idx = idx;
        return ret;
    }
    public Dictionary<int, List<CallInfo>> GetCallInfoDictionary()
    {
        return _dicLoading;
    }
    public List<string> GetCallMembers()
    {
        List<string> ret = new List<string>();
        foreach (var entry in _dicLoading)
        {
            string members = string.Empty;
            string prefix = string.Empty;
            foreach (InformationCallChain.CallInfo list in entry.Value)
            {
                if (members.Length > 0)
                    prefix = ".";
                members += prefix + list.Member;
            }
            ret.Add(members);
        }
        return ret;
    }
    #region ICloneable Members
    public object Clone()
    {
        return this.MemberwiseClone();
    }
    #endregion
}

This code, although lengthy, is pretty straightforward. It transforms the expression into something like <context>.Include("Users.Folders.etc").

So putting this all together makes it possible for clients (service consumers) to consume the services with more control over the amount of information being retrieved over the wire (intranet, internet, etc.). Here is a snippet of production code so you can see how simple this method is.

EagerLoader<License> eagerLoader = new EagerLoader<License>();
eagerLoader.Reference(c => c.TypeOfLicense);
eagerLoader.Reference(c => c.Account);
eagerLoader.Reference(c => c.Account.Owner);
eagerLoader.Reference(c => c.Account.Application);

LicensingClient proxy = new LicensingClient();
IEnumerable<License> licenses =
    proxy.GetLicensesWithDependency(eagerLoader.GetCallChain());

In this case the client is consuming a bunch of information associated with the License entity in a single access, while all other accesses, by default, return no associated information (which is useful in most cases).

That’s it folks… Hope this can be useful

In today’s post I will demonstrate how to serialize a MethodInfo object so you can later deserialize and invoke it. This code is part of a more complex system that logs code, to be executed later, in a database for very long periods of time (let’s say weeks). To avoid developing a complex scheduler, the system just dumps the info onto a database so it can be run much later. It can also recover nicely from system failure/downtime.

The Validation class shows what I want to log.

public class Validation
{
    public bool MethodToLog(object[] args)
    {
        if ((int)args[0] == 1)
            return false;
        return true;
    }

    public static bool MethodToLog2(object[] args)
    {
        if ((int)args[0] == 1 && (int)args[1] == 1)
            return true;
        return false;
    }
}

The next code is the Main method; it shows how to serialize and invoke the logged data from and into files. You could instead use MemoryStreams (or any other kind) or byte[] to send the information directly to a database, for instance.

static void Main(string[] args)
{
    Validation val = new Validation();
    object[] args1 = new object[] { 1 };
    object[] args2 = new object[] { 1, 2 };
    MethodSerializer ms = new MethodSerializer();
    //serialize instance method
    using (FileStream fileStream = File.Create("method.dat"))
    {
        ms.SerializeMethodCall(fileStream, val.MethodToLog);
    }
    //serialize static method
    using (FileStream fileStream = File.Create("method2.dat"))
    {
        ms.SerializeMethodCall(fileStream, Validation.MethodToLog2);
    }
    //deserialize instance method
    using (FileStream fileStream = new FileStream("method.dat", FileMode.Open))
    {
        ms.InvokeSerializedMethod<Validation>(fileStream, args1);
    }
    //deserialize static method
    using (FileStream fileStream = new FileStream("method2.dat", FileMode.Open))
    {
        ms.InvokeSerializedMethod(fileStream, args2);
    }
}

And finally, the class that handles the serialization and invocation of the code.

public class MethodSerializer
{
    public void SerializeMethodCall(Stream stream, Func<object[], bool> function)
    {
        MethodInfo methodInfo = function.GetMethodInfo();
        BinaryFormatter formatter = new BinaryFormatter();
        formatter.Serialize(stream, methodInfo);
    }

    public Stream SerializeMethodCall(Func<object[], bool> function)
    {
        Stream stream = new MemoryStream();
        SerializeMethodCall(stream, function);
        return stream;
    }

    public void InvokeSerializedMethod<T>(Stream stream, object[] args) where T : new()
    {
        BinaryFormatter formatter = new BinaryFormatter();
        MethodInfo methodInfo = (MethodInfo)formatter.Deserialize(stream);
        methodInfo.Invoke(new T(), new object[] { args });
    }

    public void InvokeSerializedMethod(Stream stream, object[] args)
    {
        BinaryFormatter formatter = new BinaryFormatter();
        MethodInfo methodInfo = (MethodInfo)formatter.Deserialize(stream);
        methodInfo.Invoke(null, new object[] { args });
    }
}

And that’s it: a simplified version of a system that serializes and invokes code. There are, however, a couple of considerations. You could improve the arguments; I’m using object[] for exemplification, but in actuality you could (and should) introduce some type safety.
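One hedged sketch of what that type safety could look like: instead of a BinaryFormatter payload, persist the declaring type and method name as plain strings and resolve them on the way back. MethodRef is a name I made up for this sketch, and real code would also need to handle overloads and missing types.

```csharp
using System;
using System.Reflection;

public static class MethodRef
{
    // Persist a method as "<assembly-qualified type>|<method name>".
    public static string Serialize(MethodInfo method)
    {
        return method.DeclaringType.AssemblyQualifiedName + "|" + method.Name;
    }

    // Resolve the string back to a MethodInfo and invoke it.
    // Note: GetMethod by name alone throws for overloaded methods.
    public static object Invoke(string serialized, object target, object[] args)
    {
        string[] parts = serialized.Split('|');
        MethodInfo method = Type.GetType(parts[0]).GetMethod(parts[1]);
        return method.Invoke(target, args);
    }
}
```

The string form is human-readable in the database, and nothing executable is deserialized, which sidesteps the security concerns BinaryFormatter carries.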

By the way, here is a useful quote from Wikipedia:

In computing, type introspection is the ability of a program to examine the type or properties of an object at runtime. Some programming languages possess this capability.

Introspection should not be confused with reflection, which goes a step further and is the ability for a program to manipulate the values, meta-data, properties and/or functions of an object at runtime.
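To make the quoted distinction concrete: calling GetType() alone is introspection, while resolving a MethodInfo and invoking it, as this post does, is reflection. A small self-contained illustration (IntrospectionDemo is just a name for this sketch):

```csharp
using System;
using System.Reflection;

public static class IntrospectionDemo
{
    // Introspection: only examining the runtime type.
    public static string TypeNameOf(object o)
    {
        return o.GetType().Name;
    }

    // Reflection: going a step further and invoking a member through metadata.
    public static object CallParameterlessToString(object o)
    {
        MethodInfo toString = o.GetType().GetMethod("ToString", Type.EmptyTypes);
        return toString.Invoke(o, null);
    }
}
```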

Hi, today I’ll be posting a very simple way to intercept method calls by using Castle Dynamic Proxy.

This is a very simple technique that will intercept any call made to a method and, from there, you can pre and post process with your own code. I use this technique to monitor some calls, especially to monitor what args are being sent into them.

First, define the IServiceBase and the IService interface

public interface IServiceBase

{

}

public interface IService : IServiceBase

{

    void DoSomething(int x);

}

Then, create the class that implements the IService interface:

public class Service : IService
{
    #region IService Members
    public void DoSomething(int x)
    {
        Console.WriteLine("Service.DoSomething(int x)");
    }
    #endregion

    public void SomethingElse(string x)
    {
        Console.WriteLine("Service.SomethingElse(string x)");
    }
}

And finally, the factory where the magic happens.

public class ServiceFactory
{
    public static TInterface CreateService<TImplementation, TInterface>()
        where TInterface : class, IServiceBase
        where TImplementation : TInterface, IServiceBase, new()
    {
        var dp = new ProxyGenerator();
        TInterface target = new TImplementation();
        IInterceptor interceptor = new LogInterceptor();
        return dp.CreateInterfaceProxyWithTarget<TInterface>(target, interceptor);
    }
}

 

public class LogInterceptor : IInterceptor
{
    #region IInterceptor Members
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Method {0}", invocation.Method.Name);
        foreach (var item in invocation.Arguments)
        {
            Console.WriteLine("type:{0}, value:{1}", item.GetType().ToString(), item.ToString());
        }
        invocation.Proceed();
        Console.WriteLine("Post process");
    }
    #endregion
}

To test this recipe:

class Program
{
    static void Main(string[] args)
    {
        IService service1 = ServiceFactory.CreateService<Service, IService>();
        service1.DoSomething(10);
        Service service2 = new Service();
        service2.SomethingElse("xx");
        Console.ReadLine();
    }
}

Only operations defined in the IService contract are interceptable and only if created via the ServiceFactory.

You can add Castle Dynamic Proxy to your project using NuGet. If you want to proxy code from outside your project (another assembly) you must add [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")] to your client assembly.

Here is the print screen:

Hope this can be useful to you.