Tag Archives: c#

Using OpenCover and NUnit with MSBuild

I am working on a project that has a large code base with a lot of logic. For this project I am trying to adopt a more TDD-oriented development approach. This includes unit and integration testing as well as code coverage. Yes, I know that a high coverage percentage is not the same as good tests, but on the other hand, low coverage indicates that more tests need to be written.

We are currently using NUnit.2.5.10.11092, and the code coverage tools I was able to find were NCover, PartCover and OpenCover. NCover is no longer free, and from what I can read, OpenCover seems to be the better choice compared to PartCover.

This blog post describes well some of the differences and features of PartCover vs. OpenCover.

MSBuild

This is the MSBuild configuration file that we are currently using to run unit tests and code coverage.

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">

  <!-- Application Configurations -->
  <PropertyGroup>
    <NUnit-ToolPath>..\packages\NUnit.2.5.10.11092\tools\</NUnit-ToolPath>
    <OpenCover-ToolPath>..\packages\OpenCover.1.0.719\</OpenCover-ToolPath>
    <ReportGenerator-ToolPath>..\packages\ReportGenerator.1.2.1.0\</ReportGenerator-ToolPath>
    <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\..\Libs\msbuild\</MSBuildCommunityTasksPath>

    <!-- Add paths to assemblies to test -->
    <TestAssembliesPath>..\Tests\Core.Tests\bin\Debug\Core.Tests.dll ..\Tests\Model.Tests\bin\Debug\Model.Tests.dll</TestAssembliesPath>

  </PropertyGroup>

  <ItemGroup>
    <AllProjects Include="..\Tests\**\*.csproj" />
  </ItemGroup>

  <Target Name="Clean">
    <MSBuild Projects="@(AllProjects)" Targets="Clean" />
  </Target>

  <Target Name="Compile">
    <MSBuild Projects="@(AllProjects)" Targets="Build" Properties="WarningLevel=1" />
  </Target>

  <Target Name="Test" DependsOnTargets="Clean;Compile">
    <!-- /domain=single is not supported by the NUnit MSBuild task, and it is required for OpenCover coverage runs -->
    <Exec Command="$(NUnit-ToolPath)nunit-console-x86.exe $(TestAssembliesPath) /nologo /noshadow /domain=single /output=test-results.xml" />
  </Target>

  <Target Name="Coverage" DependsOnTargets="Clean;Compile">

    <!-- /domain=single is not supported by the NUnit MSBuild task, and it is required for OpenCover coverage runs -->
    <Exec Command="$(OpenCover-ToolPath)OpenCover.Console.exe -register:user -target:&quot;$(NUnit-ToolPath)nunit-console-x86.exe&quot; -targetargs:&quot;/noshadow $(TestAssembliesPath) /domain:single&quot; -filter:&quot;+[Model]* +[Core]* +[Datalayer]*&quot; -output:coverage.xml" />
    <Delete Files=".\coveragereport" />
    <Exec Command="$(ReportGenerator-ToolPath)ReportGenerator.exe coverage.xml &quot;coveragereport&quot; html" />
    <Exec Command="$(ReportGenerator-ToolPath)ReportGenerator.exe coverage.xml &quot;coveragereport&quot; xmlsummary" />
    <Delete Files="coverage.xml" />
  </Target>

  <Target Name="Build" DependsOnTargets="Clean;Compile;Test;" />

</Project>

Running the different targets from the command prompt is easy, just type “msbuild [MSBUILD CONFIG FILE].xml /t:[TARGET]”

[MSBUILD CONFIG FILE] should be the name of the file above; [TARGET] can be any of the following: Clean, Compile, Test and Coverage.

The Coverage target generates a fine HTML report as well as an XML summary.

Please note that I have not used the NUnit task from MSBuild Community Tasks, because the /domain=single option is important and is currently not supported by that task.


Cuda v3.2 template project using C++

Important

The tutorial is missing some compilation, linker and library settings steps. However, you can download the template project; it works and has everything set up.

Introduction

I am a developer who has been developing software using .NET and C# for several years. I have never used C or C++, and it has never been required.

I like to investigate new technologies, mainly because I am curious, but also because it could make my daily development work easier or smarter.

Recently my focus has been directed towards GPGPU on the Nvidia Cuda platform.

The programming language for Cuda is called “Cuda C”. The name implies that knowledge of C is indeed required for using GPGPU on the Cuda platform.

I discovered that .NET bindings to the Cuda platform and drivers exist. However, I find their usage complicated and insufficient, and furthermore, kernel development will still have to be done in Cuda C.

These facts made me realise that I would have to learn a bit of C and C++ to use Cuda as it was actually intended by Nvidia. Nvidia provides many samples and suggests that a Cuda development environment on Windows could use Visual Studio and Nvidia Parallel NSight for debugging, profiling etc.

As my knowledge of C and C++ development was severely limited, so was my knowledge of setting up and configuring Visual Studio 2008 for Cuda C development.

I have read “Cuda by example…” (http://developer.nvidia.com/object/cuda-by-example.html), “Programming Massively Parallel Processors…” (http://www.nvidia.com/object/io_1264656303008.html) and “C Programming Language, 2. edition” (http://www.pearsonhighered.com/educator/product/C-Programming-Language/9780131103627.page). These books have given me the foundation to start developing using Cuda C and GPGPU.

Setting up Visual Studio 2008 and making the compiler work required some work, but here is what I did.

1. Download and install driver and toolkit

Download Cuda toolkit and the developer driver and install. A restart is probably required. (http://developer.nvidia.com/object/gpucomputing.html)

2. Start Visual Studio 2008, and create a new project of type Win32 Console Application

Give the project and solution a name (here called Cuda_Template).

3. Click Next

4. Select Console application and check the empty project, then click Finish

5. Add new item called main.cpp of type C++ File

6. Add the following code to the file

#include <stdio.h>

int main() {
    printf("Hello world...\n");
    return 0;
}

7. Build and try to run the exe file. The output should be “Hello world...”.

8. Select Project -> Custom Build Rules…

9. Select Cuda Runtime API build rule (v3.2)

10. Add a new file called kernel.cu

11. Add the following to the file kernel.cu

/* power: raise base to n-th power; n >= 0 */
__device__ int devicePower(int base, int n) {

    int p = 1;

    for (int i = 1; i <= n; ++i) {
        p = p * base;
    }

    return p;
}

__global__ void power( int *base, int *n, int *output, int threadMax ) {

    int tid = threadIdx.x + blockIdx.x * blockDim.x;

    if (tid < threadMax) {
        output[tid] = devicePower(base[tid], n[tid]);
    }

}

12. Right click the newly created file and select properties

13. Set the “Exclude From Build” property and make sure that the project still builds

14. Create a new file called call_kernel.cu

15. Add the following to the file call_kernel.cu

#include <cuda_runtime_api.h>
#include "main.h"

// includes, kernels
#include <kernel.cu>

void call_kernel_power(int *base, int *n, int *output, int elementCount) {

    int *dev_base, *dev_n, *dev_output;
    int gridX = (elementCount+ThreadsPerBlock-1)/ThreadsPerBlock;

    cudaMalloc( (void**)&dev_base, elementCount * sizeof(int) );
    cudaMalloc( (void**)&dev_n, elementCount * sizeof(int) );
    cudaMalloc( (void**)&dev_output, elementCount * sizeof(int) );

    cudaMemcpy( dev_base, base, elementCount * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy( dev_n, n, elementCount * sizeof(int), cudaMemcpyHostToDevice);

    power<<<gridX,ThreadsPerBlock>>>(dev_base, dev_n, dev_output, elementCount);

    cudaMemcpy( output, dev_output, elementCount * sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree( dev_base );
    cudaFree( dev_n );
    cudaFree( dev_output );
}

16. Create a new header file called main.h

17. Add the following content to the file

#define ThreadsPerBlock 128

#include <stdio.h>

void call_kernel_power(int *base, int *n, int *output, int elementCount);

18. Update the main.cpp file with the following:

#include "main.h"

#define N   80000

int main() {

    printf("Power Cuda kernel test from C++\n");
    printf("Testing %d elements\n", N);

    int base[N], n[N], output[N];

    for(int i = 0; i < N; i++) {
          base[i] = 2;
          n[i] = i+1;
          output[i] = 0;
      }

    call_kernel_power(base, n, output, N);

    for(int i = 0; i < N && i < 15; i++) {

          printf("%d^%d = %d\n", base[i], n[i], output[i]);

      }

      printf("Done\n");

    return 0;
}

19. That should be it…

You now have a template that you can work from. When you build and run it, you should see the powers computed by the kernel printed for the first 15 elements.

NHibernate CompositeId and GetHashCode SELECT N+1 problem

I was looking in my NHProf log and saw a lot of SELECT requests. Too many, and I thought I was having a SELECT N+1 problem.

Using the “Stack Trace” feature in NHProf, I identified a call being made to GetHashCode() of the class (MyClass) holding the composite id. MyClass looked like this:

public class MyClass
{
    public virtual Key1Class Key1 { get; set; }
    public virtual Key2Class Key2 { get; set; }

    /*
     * Other virtual properties
     * ...
     */

    public override int GetHashCode()
    {
        unchecked
        {
            return ((Key1 != null ? Key1.GetHashCode() : 0) * 397) ^ (Key2 != null ? Key2.GetHashCode() : 0);
        }
    }
}

Note that Key1 and Key2 are both references, and please take a closer look at the GetHashCode method.

The problem

The problem I was facing: NHibernate called the GetHashCode() of MyClass when doing the query. And the call to MyClass‘s GetHashCode method subsequently called GetHashCode() of both the Key1 and Key2 classes.

Key1 and Key2 were loaded as proxies, and hence a call to anything other than their Id resulted in a select being fired.

The solution

The solution was to rewrite the GetHashCode method to this:

public override int GetHashCode()
{
    unchecked
    {
        return ((Key1 != null ? Key1.Id : 0) * 397) ^ (Key2 != null ? Key2.Id : 0);
    }
}

Note that the Ids of both Key1Class and Key2Class are integers.

Now a call to MyClass GetHashCode does not result in individual SELECT statements being fired, even though the Keys are lazy loaded.

IDrive .NET backup library

What does IDriveLibrary do?

IDrive EVS gives you 5GB of free backup space on the IDrive platform. IDrive EVS makes it possible, via a kind of HTTP web interface, to upload files and more.

The library I have written is called IDriveLibrary and can be found here: https://bitbucket.org/ovesen/idrivelibrary. My idea for the project was to be able to write a backup routine in C# for my servers.

Unfortunately, IDrive EVS seems buggy and slow. In an earlier review of backup providers I deemed IDrive buggy (http://blog.ovesens.net/2010/01/test-of-online-backup-providers/). Nothing has changed.

During the development of the library I have had to contact IDrive 5 times, either because documented features simply did not work or because of errors in the documentation.

Sample TwitterTest authentication and console application

How to make your application or service tweet

My case: I have a Windows service, and I want to be able to know the internal state of this service, e.g. when an exception was thrown. The obvious answer for this is logging. But I already use Log4Net, and I want to be able to access this info without having to remote to the server.
Others have solved this issue by making e.g. Kayak part of the service and exposing the state as JSON. That does sound tempting; however, I am more inclined towards some kind of logging mechanism. I know that Twitter appenders for Log4Net exist, but by using something existing I will not learn anything :)
So what I want to implement is custom Twitter logging, and here is how.

  1. Register your application with Twitter (http://dev.twitter.com/apps/new)
  2. Save your consumerKey and consumerSecret
  3. Run TwitterAuthenticator to retrieve the Access token. Save Token and TokenSecret
  4. Use the TweetConsole as a sample for Tweeting. Use the consumerKey, consumerSecret and the Access Token details

That is it…

You can find the source code here: https://bitbucket.org/ovesen/twittertest

Using Multiple Configuration files for NHibernate (HybridSessionBuilder)

 

I have been using Castle Windsor and NHibernate for some months now, so I am a bit new and still learning. But then again, it is learning developers like me who ask all the questions.

This post is also written because of a simple question: Does the HybridSessionBuilder support multiple configurations?

Why this question? Well, I have a project where some repositories use one database and others use another database.

Original implementation

My little journey started out with me looking at the original implementation of the HybridSessionBuilder (HSB). You can read about the original implementation here:

http://jeffreypalermo.com/blog/use-this-nhibernate-wrapper-to-keep-your-repository-classes-simple/

But as you can see, the original implementation used a static session factory and session, so it does not support multiple configurations.

CodeCampServer/Tarantino

I then found out that the CodeCampServer also was using the HSB. Hoping to find an implementation that supported my request, I then started to read the source code.

But by viewing the source code for CodeCampServer, I soon found out that the HSB code had been updated to support multiple factories and sessions. Yeeeaaaaa :)

Furthermore, I found out that the HSB is now part of a collection of libraries called Tarantino.

I was happy; the HybridSessionBuilder now supported my requirements and was ready to be used in my projects. Or so I thought.

Test web application

Then I built a small test web application project. It used Castle Windsor (for dependency injection) and NHibernate. I quickly configured Castle Windsor and made two NHibernate configuration files.

nhibernate.cfg.xml used this connection string:

Data Source=localhost\sqlexpress;Initial Catalog=DB1;Integrated Security=true

nhibernate2.cfg.xml used this connection string:

Data Source=localhost\sqlexpress;Initial Catalog=DB2;Integrated Security=true

My windsor.config.xml looks like this:

<component id="firstRepository" type="HybridSessionBuilderTest.FirstRepository, HybridSessionBuilderTest">
  <parameters>
    <ConfigurationFile>hibernate.cfg.xml</ConfigurationFile>
  </parameters>
</component>
 
<component id="secondRepository" type="HybridSessionBuilderTest.SecondRepository, HybridSessionBuilderTest">
  <parameters>
    <ConfigurationFile>hibernate2.cfg.xml</ConfigurationFile>
  </parameters>
</component>

It basically just configures the two repositories, each with its own configuration file.

But to my surprise, it did not work. I made a small page that prints the connection string of each of the repositories, and only the default NHibernate configuration file was used.

Bug or feature?

That was not the output I was looking for, so to find the problem, I did some more tests and dug into the code.

I was expecting an object structure with one session factory and session per configuration file, but instead I got a structure where everything was based on the default configuration. I identified the following method as the central one when the session was instantiated:

   1: private ISession getExistingOrNewSession(ISessionFactory factory, string configurationFile)
   2:         {
   3:             if (HttpContext.Current != null)
   4:             {
   5:                 var session = GetExistingWebSession();
   6:  
   7:                 if (session == null || !session.IsOpen)
   8:                 {
   9:                     session = openSessionAndAddToContext(factory, configurationFile);
  10:                 }
  11:  
  12:                 return session;
  13:             }
  14:  
  15:             var currentSession = _currentSessions.ContainsKey(configurationFile)
  16:                                     ? _currentSessions[configurationFile]
  17:                                     : null;
  18:             if (currentSession == null || !currentSession.IsOpen)
  19:             {
  20:                 _currentSessions[configurationFile] = OpenSession(factory);
  21:             }
  22:  
  23:             return _currentSessions[configurationFile];
  24:         }

Patch/solution

The problem I identified is on line #5: it returns the existing session based on the default configuration, no matter what you specify in the configurationFile parameter.

I changed it to:

   1: var session = GetExistingWebSession(configurationFile);

 

And tadaaa: the second repository now uses the second configuration file :)

Conclusion

You might ask why I spent time writing a blog post describing this little change to the HybridSessionBuilder.

First of all, having an HSB that can use multiple configurations is rather important to me and the projects I am working on.

Second of all, as NHibernate and Castle Windsor are still a bit new to me, one of the best ways to learn their capabilities and how to use them is to dig in deep. And that is exactly what I have done.

And BTW – I have not just learned about the HSB, but also about CodeCampServer, Tarantino, S#arp, how they handle multiple configurations, and how it all works together.

NHibernateSessionModule

Please note: before my change is used, you also need to update the NHibernateSessionModule. It currently does not dispose all the sessions that are opened.


How will you parallelize your existing codebase? Try R.A.S.P

Original post:

http://www.atalasoft.com/cs/blogs/rickm/archive/2008/12/23/how-will-you-parallelize-your-existing-codebase-try-r-a-s-p.aspx

———————————————————————————————————————-


There has been much talk of how we will be writing all of our new code with parallelization in mind. However, what of our existing code? It’s unlikely that everyone will just suddenly dump decades of existing code and write everything from scratch. In this article I’m going to provide a simple methodology for how we might deal with the ever building problem of parallelizing our existing mountains of code. Comments and contributions are welcome.

 

Methodologies of the Past

From the STL to .NET, the frameworks we have constructed our applications around have been heavily dependent on the idea of an application having a single thread. Given that the foundation of what we have all been using for a very long time was constructed around this preconception, it’s unreasonable to expect that much of our existing code will ever be fully parallel. Even those that wrote code on top of a thread safe framework may find that years of patches and poor design decisions make ground up parallelization impossible.

If we set our expectations reasonably, we see that we should instead focus on leveraging parallelism to improve the performance of the slowest parts of our software. From this viewpoint, parallelization is an optimization problem. Like all optimization, the difficulty of parallelizing code will have much to do with the methodologies which were used to write it.

Of course, object-oriented code written with modern S.O.L.I.D. principles will be easier to parallelize than older procedural code. At the same time, a poorly organized codebase or poorly written code will always make change difficult, and so will also be hard to parallelize. This is why well written code is worth the investment. We will see the investment paying off in spades for the companies who have bothered to care about code quality. Others who find they have spaghetti code under the hood will need to deeply segregate and modularize before parallelization is possible.

 

A Methodology for Revisiting the Past

In most cases it would be a poor choice to implement your own threading API. Efficient and easy to use parallelization APIs are coming to (or are already part of) every commonly used language and framework. Most of these APIs are not only built on top of years of research, they have also been written and debugged by a large number of people with specific expertise. These APIs are a godsend because they will allow most developers to parallelize existing software with a minimum amount of pain. The parallelization of existing code bases will be much the same as any other kind of performance tuning.

The key will be using a profiler to identify places in the code that would be sped up by parallelization and leveraging these new APIs to take advantage of the available hardware. The exciting part is that this can be done with any existing profiler and many existing APIs. The unfortunate part is that because memory sharing is such a big issue, parallelization requires a degree of separation beyond other types of optimization and so is likely to require some amount of refactoring.

Not all types of performance problems are conducive to being solved by parallelization; careful evaluation of the problem at hand is required. Also, as with anything that requires significant code change, building a solid test fixture is key to introducing as few bugs as possible. By leveraging the ideas of avoiding premature optimization, pragmatic unit testing, using existing APIs, and mindful refactoring, it will be possible to introduce parallelization into many already existing projects with a manageable amount of risk.

 

What is RASP?

While not included in the acronym, the first step in any kind of optimization is profiling. Before you can begin to parallelize your code, you must determine where the bottlenecks might be. A broadly defined list of parallelizable things to look for would be, to quote Rich Hickey, “independent data/work, moderate-to-course-grained work units and/or complex coordination logic that would be simplified with threads”. A couple of quick examples of low hanging fruit to be on the lookout for would be slow iterative loops and blocking I/O. It is important to note that, as a general guideline, it would be wrong to parallelize anything if it would not significantly increase the speed of your software.

For each of the bottlenecks found while profiling, parallelization is best separated into four steps:

Review: Review code to determine if it is a good candidate for parallelization.
Anchor: Create a unit test fixture to ensure that the behavior of the to be parallelized code does not change.
Separate: Ensure that the to be parallelized code has no shared memory constraints, and if it does, factor them out if possible.
Parallelize: Minimally refactor for parallelization while leveraging an existing API to do the heavy lifting.

As the specifics of what each of these would entail depend greatly on exactly which platform and language is in use, I will not go into them deeply now. Overall, it’s a simple methodology, but I think it is both sufficient for the task at hand and broadly applicable.

 

Conclusion

Review, Anchor, Separate, Parallelize. It’s not intended to be a difficult concept, but instead to provide a simple path to parallelization. I would be very interested in hearing any opinions on what RASP might be missing or how it may be better clarified. While I didn’t have time to discuss them deeply in this post, parallelization patterns are also a key concept in using RASP, as if you can’t easily identify what can be parallelized then it would be impossible to use any parallelization methodology. In the future I hope to flesh out RASP further, as well as discuss parallelization patterns in depth.