Wednesday 1 October 2008

Dublin - it's not a city it's a new Microsoft codename :)

It looks like Windows Server 2008 needs to be extended to be able to host WCF 4.0 and WF 4.0 applications. This set of extensions goes by the codename Dublin. Dublin is of course the capital of Ireland, and I'm curious whether the author of the codename was aware of that. Anyway, you can find more details about the new platform here. At first glance it looks interesting, but there is still no mention of publish/subscribe messaging. I hope I've missed something, as this is a huge gap in the communication framework that Microsoft provides.

Friday 29 August 2008

27 stack frames on 32 bit OS and 22 stack frames on 64 bit OS

The 64 bit JIT and the 32 bit JIT are two very different beasts, which is what one would suspect. Still, I was rather surprised that the 64 bit JIT was able to reduce the depth of a call stack by roughly 20% in comparison with the 32 bit JIT. A piece of my code crawls the call stack to gather some runtime characteristics and it broke on a 64 bit machine. That's how I started a small investigation and found this difference. In most cases it's not a good idea to rely on the way the JIT works, as this is subject to change, but from time to time there is simply no other way. If you are looking for more information about the 32/64 bit JIT, check this blog post by Scott Hanselman, which is an overview with links to other Microsoft bloggers who dive much deeper into the details.
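For the curious, the kind of stack crawling I mean can be done with System.Diagnostics.StackTrace. This is a minimal sketch, not my actual code:

using System;
using System.Diagnostics;

class StackCrawler
{
    static void Main()
    {
        // The frame count below is exactly what differs between the
        // 32 bit and 64 bit JIT: inlining and other codegen decisions
        // change how many managed frames a logical call chain produces.
        StackTrace trace = new StackTrace();
        Console.WriteLine("Managed frames: " + trace.FrameCount);

        for (int i = 0; i < trace.FrameCount; i++)
        {
            Console.WriteLine(trace.GetFrame(i).GetMethod().Name);
        }
    }
}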


Thursday 28 August 2008

Henrik Kniberg and his list of top 10 ways of screwing Scrum

Today's Agile methodologies hesitate to state that there are practices that work in every single environment, but from my experience there is at least one that works in 99,999999% of cases :). It's called Team. As long as people work as a team and not as a group of individuals, everything is possible and (nearly) every problem can be overcome. Just my 2 cents.
Watch this video to see Henrik talk in a compelling way about what you should avoid when you practice Scrum. Every second slide is accompanied by a real-life story, which makes this 1,5h presentation fly by in no time.

Tuesday 22 July 2008

Hidden Power


I know it's not a great post, but Task Manager simply made my day a few weeks ago :).

Monday 30 June 2008

A setting that can boost performance of any heavily network-dependent application

By default .NET allows only 2 concurrent connections to a given network address per AppDomain. In most cases this works fine, but if your app makes a couple of dozen network calls a second this value might be too small, and it might actually cause a bottleneck that is very hard to diagnose. I increased the setting to the value recommended by Microsoft (number_of_cores x 12) and one of my services sped up significantly. Having said that, I have to stress that there is no guarantee this setting will help in your case. Remember: measure, measure and once again measure when you optimize.
And the setting is:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="96"/>
  </connectionManagement>
</system.net>
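If you prefer not to touch the config file, the same limit can be set programmatically through ServicePointManager; a minimal sketch (96 assumes an 8-core box and the cores x 12 rule of thumb):

using System.Net;

// Set this once at startup, before the first outbound request is made;
// it is the programmatic twin of the config entry above.
ServicePointManager.DefaultConnectionLimit = 96;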

WCF service hangs - the pool of available sessions might have been exhausted

By default WCF accepts only 10 concurrent sessions, which is not enough for most applications. If there are 20 clients, 10 of them will be blocked until their requests time out. You can always increase the number of sessions with the serviceThrottling behavior.
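A minimal configuration sketch (the limits are example values, not recommendations):

<serviceBehaviors>
  <behavior name="ThrottledBehavior">
    <!-- maxConcurrentSessions defaults to 10 -->
    <serviceThrottling maxConcurrentCalls="16"
                       maxConcurrentSessions="100"
                       maxConcurrentInstances="100"/>
  </behavior>
</serviceBehaviors>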
But this works only as long as you control all the clients. If you don't, some of them might not close their sessions properly, which in turn might lead to a resource leak on the service side. This is not an easy problem to solve unless you are ready and able to abandon sessions.
From my perspective it's much more important to know that a service is about to reach its limit of sessions, or that it is hung because it has already reached it. Unfortunately, as far as I know, there is no out-of-the-box way of monitoring the number of active sessions per WCF service using Performance Counters. This leaves us with only one option, namely writing a custom performance counter on our own. This can be done as a WCF extension that implements the IChannelInitializer and IInputSessionShutdown interfaces.
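A skeleton of such an extension might look like the code below. The counting is shown with a plain static field; a real implementation would publish the value through a custom performance counter and would still need to be registered with the dispatcher via a service behavior:

using System;
using System.ServiceModel;
using System.ServiceModel.Dispatcher;
using System.Threading;

public class SessionMonitor : IChannelInitializer, IInputSessionShutdown
{
    private static int activeSessions;

    // IChannelInitializer: called whenever a new channel (session) is accepted.
    public void Initialize(IClientChannel channel)
    {
        Interlocked.Increment(ref activeSessions);
        // Closed fires for both a graceful close and an abort,
        // so one subscription covers both ends of a session.
        channel.Closed += delegate { Interlocked.Decrement(ref activeSessions); };
    }

    // IInputSessionShutdown: the client finished sending messages.
    public void DoneReceiving(IDuplexContextChannel channel) { }

    // IInputSessionShutdown: the channel faulted before the session completed.
    public void ChannelFaulted(IDuplexContextChannel channel) { }
}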
When a service seems to be frozen, the story is not that simple, as there might be dozens of reasons why it is in such a state. The only way I know to prove or disprove that the problem is related to session exhaustion is to create a memory snapshot of the process the service is hosted in and use Debugging Tools for Windows to check the state of all ServiceThrottle objects.
The following steps show how to carry out such an investigation :) :
0. Install Debugging Tools for Windows and copy C:\Windows\Microsoft.NET\Framework\v2.0.50727\sos.dll (the extension for .NET debugging) to the folder where Debugging Tools for Windows is installed.
1. Find the id of the process that is hosting the WCF service we are examining. In my case it is one of the IIS worker processes.

2. Create a memory snapshot using the adplus script, which is part of Debugging Tools for Windows. By default adplus creates all snapshots under the folder where Debugging Tools for Windows is installed.
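For reference, steps 1 and 2 run from a command prompt might look like this (the pid and the output folder are made up):

tasklist /fi "imagename eq w3wp.exe"
adplus -hang -p 1234 -o c:\dumps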

3. Launch windbg.exe (the same location as adplus) and open the memory snapshot.

4. Type .load sos and press enter to load sos.dll into windbg.

5. Type !dumpheap -type ServiceThrottle -short in the command line to list all objects of type ServiceThrottle that exist on the managed heap. By a list of all objects I mean a list of their addresses in memory.

6. For each address on the output list carry out steps 7 and 8.
7. Type !do <address of the object> to see what's inside of it.

8. The ServiceThrottle object has a bunch of fields, but only one of them, called sessions, is interesting from our perspective. Type !do <address of the sessions field> to see what's inside of it.

If you find a sessions object whose count and capacity fields are set to the same value, you know that the pool of available sessions has been exhausted. If you can't find one, at least you know that something else is wrong with your service.
Happy debugging :)

Saturday 7 June 2008

Windows XP 64 bit - a relatively unknown but great operating system

A few weeks ago I got a new beefy machine, and the guy who was setting it up asked me if I wanted to try XP 64 bit. At first I didn't see any compelling reason to move to a new platform, but then I realized that XP 32 bit is not going to take full advantage of 4GB of RAM, so I decided to give it a try. I'm glad I made that decision, as it turned out that XP 64 bit is superior to XP 32 bit as a foundation for software development.
  • XP 64 bit can leverage all 4GB of RAM, which allows me to run 4-5 instances of VS.NET, SQL Server, Outlook, Firefox and a dozen background applications without any noticeable performance degradation.
  • XP 64 bit shares core components with Windows Server 2003, which would explain its great performance and reliability.
  • XP 64 bit comes with IIS 6.0. I suppose I don't have to explain why this is big. If my point is not clear to you, just compare IIS 5.1 and IIS 6.0 and you will immediately understand what I'm talking about.
I asked quite a few software developers about XP 64 bit and none of them had ever considered installing it. It looks like the Marketing Department at Microsoft is very busy coming up with ever longer names for Microsoft products :) and has no time to market one of the best systems they've ever released.

Thursday 29 May 2008

WeakEvent - you wish it was here

At first glance .NET events are an easy and harmless way to decouple components. The former is true but the latter is not. The reason is that whenever an instance of a class subscribes to an event published by another class, a strong link between the two is established. By strong link I mean that the subscriber (listener) won't get garbage-collected as long as the publisher is alive. The only way to break that link is to unsubscribe from the event, which is easily forgotten because the link is not explicit. Additionally, there are cases when explicit (deterministic) cancellation of a subscription is impossible. If on top of that the publisher is a long-living object, we might face a memory leak. In an ideal world there would be a way of specifying that a subscription is weak, meaning that if the subscription is the only link to an object then the object can be garbage-collected and the subscription deleted. .NET does not provide that facility out of the box, but fortunately it provides building blocks that in most cases let us build a good enough solution. The idea is to intercept the calls that add and remove subscribers to/from an event and create a weak link between subscribers and the publisher instead of the default, strong one.
When you define an event you don't have to write the add/remove methods on your own because the C# compiler generates them automatically. Basically, the following code snippet:

public event EventHandler<EventArgs> MyEvent;

is just "syntactic sugar" that C# compiler transforms to much more verbose form. You can find detailed description of this process in CLR via C#  by Jeffrey Richter. From our perspective the most important thing is that we can overwrite the default behavior of the compiler and inject our own implementation of Add and Remove methods in a way that is completely transparent to subscribers. SomeClass and Subscriber classes show how it can be done. Don't worry about WeakEvent<T> class as it will be explained later.

public class SomeClass
{
    // must be initialized, otherwise the add/remove accessors
    // would throw a NullReferenceException
    private WeakEvent<EventArgs> myWeakEvent = new WeakEvent<EventArgs>();

    public event EventHandler<EventArgs> MyWeakEvent
    {
        add { myWeakEvent.Add(value); }
        remove { myWeakEvent.Remove(value); }
    }

    private void SomeMethodThatNeedsToRaiseMyWeakEvent()
    {
        OnMyWeakEvent(new EventArgs());
    }

    protected void OnMyWeakEvent(EventArgs args)
    {
        myWeakEvent.Invoke(args);
    }
}

public class Subscriber
{
    private SomeClass someClass;

    public Subscriber()
    {
        someClass = new SomeClass();
        someClass.MyWeakEvent += Method;
    }

    private void Method(object sender, EventArgs e)
    {
    }
}


The add and remove methods take a delegate as their input parameter. Every .NET delegate is an object with two properties. One of them is a reference to the target of the delegate (the object the delegate will be called on) and the second one is a description of the method, provided as an instance of the System.Reflection.MethodInfo class. Static delegates have the target property set to null. The target field is the root of all evil, as it keeps the subscriber alive (it is a strong reference to the object the delegate will be called on). Fortunately, the .NET framework provides a class that can act as a man in the middle between the method and its target, which lets us break the direct link between them.
The class that makes this possible is called (no surprise) System.WeakReference. An instance of the System.WeakReference class keeps a weak reference (instead of a strong one) to the object that is passed to its constructor. The weak reference can be turned into a strong reference by accessing its Target property and storing the value in an ordinary variable; in this way we resurrect the object. If the object has already been garbage-collected, the property returns null. All of this functionality is encapsulated in a custom class that I called WeakDelegate.

internal class WeakDelegate
{
    private WeakReference target;
    private MethodInfo method;

    public object Target
    {
        get { return target.Target; }
        set { target = new WeakReference(value); }
    }

    public MethodInfo Method
    {
        get { return method; }
        set { method = value; }
    }
}

WeakEvent<T> is a class that takes advantage of the WeakDelegate class to solve the problem outlined in the first paragraph. The implementation below is rather straightforward, but two pieces of code might need some explanation. The first one is inside the Invoke method. Internally we store instances of the WeakDelegate class, which means that we cannot invoke them directly; every time one of them needs to be executed we have to assemble an instance of the System.Delegate class. I don't know if the way the code creates delegates is the fastest one, but I measured the execution time of that statement and the average was 0.005384 ms per delegate, which is fast enough for me. The second one is related to the fact that the locking is done in a way that prevents threads from waiting forever. If a thread can't enter the critical section within 15 seconds, it throws an exception. The rationale behind that approach is explained here.

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Threading;

public class WeakEvent<T> where T : EventArgs
{
    // the delegate type used by ExecuteExclusively
    private delegate void Operation();

    private readonly List<WeakDelegate> eventHandlers;
    private readonly object eventLock;

    public WeakEvent()
    {
        eventHandlers = new List<WeakDelegate>();
        eventLock = new object();
    }

    public void Invoke(T args)
    {
        ExecuteExclusively(delegate
        {
            for (int i = 0; i < eventHandlers.Count; i++)
            {
                WeakDelegate weakDelegate = eventHandlers[i];
                // don't move this line to the ELSE block
                // as the object needs to be resurrected
                object target = weakDelegate.Target;

                if (IsWeakDelegateInvalid(target, weakDelegate.Method))
                {
                    eventHandlers.RemoveAt(i);
                    i--;
                }
                else
                {
                    Delegate realDelegate = Delegate.CreateDelegate(
                        typeof(EventHandler<T>), target, weakDelegate.Method);
                    EventHandler<T> eventHandler = (EventHandler<T>)realDelegate;
                    eventHandler(this, args);
                }
            }
        });
    }

    public void Remove(EventHandler<T> value)
    {
        ExecuteExclusively(delegate
        {
            for (int i = 0; i < eventHandlers.Count; i++)
            {
                WeakDelegate weakDelegate = eventHandlers[i];
                object target = weakDelegate.Target;

                if (IsWeakDelegateInvalid(target, weakDelegate.Method))
                {
                    eventHandlers.RemoveAt(i);
                    i--;
                }
                else if (value.Target == target && value.Method == weakDelegate.Method)
                {
                    eventHandlers.RemoveAt(i);
                    i--;
                }
            }
        });
    }

    public void Add(EventHandler<T> value)
    {
        ExecuteExclusively(delegate
        {
            RemoveInvalidDelegates();

            WeakDelegate weakDelegate = new WeakDelegate();
            weakDelegate.Target = value.Target;
            weakDelegate.Method = value.Method;

            eventHandlers.Add(weakDelegate);
        });
    }

    private void RemoveInvalidDelegates()
    {
        for (int i = 0; i < eventHandlers.Count; i++)
        {
            if (IsWeakDelegateInvalid(eventHandlers[i]))
            {
                eventHandlers.RemoveAt(i);
                i--;
            }
        }
    }

    private void ExecuteExclusively(Operation operation)
    {
        if (!Monitor.TryEnter(eventLock, TimeSpan.FromSeconds(15)))
        {
            throw new TimeoutException("Couldn't acquire a lock");
        }

        try
        {
            operation();
        }
        finally
        {
            Monitor.Exit(eventLock);
        }
    }

    private bool IsWeakDelegateInvalid(WeakDelegate weakDelegate)
    {
        return IsWeakDelegateInvalid(weakDelegate.Target, weakDelegate.Method);
    }

    private bool IsWeakDelegateInvalid(object target, MethodInfo method)
    {
        return target == null && !method.IsStatic;
    }
}



You might have noticed that there is some housekeeping going on whenever one of the Add, Remove or Invoke methods is called. The reason we need it is that WeakEvent<T> keeps a collection of WeakDelegate objects that might contain methods bound to objects (targets) that have already been garbage-collected. In other words, we need to take care of getting rid of invalid delegates on our own. Solutions to this problem vary from very simple to very sophisticated. The one that works in my case simply scans the collection of delegates and removes the invalid ones every time a delegate is added or removed or the event is invoked. It might sound like overkill, but it works fine for events that have around 1000-5000 subscribers and it's very simple. You might want to have a background thread that checks the collection every X seconds instead (see the sketch below), but then you need to figure out the right value of X for your case. You can go even further and keep the value adaptive, but then the solution gets even more complicated. In my case the simplest solution works perfectly fine.
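For reference, the background-thread variant can be as small as a System.Threading.Timer driving the same housekeeping; a sketch, assuming the members below are added to WeakEvent<T>:

// Scans every `interval` instead of on each Add/Remove/Invoke;
// picking the right interval is the hard part mentioned above.
private Timer cleanupTimer;

public void StartBackgroundCleanup(TimeSpan interval)
{
    cleanupTimer = new Timer(
        delegate { ExecuteExclusively(RemoveInvalidDelegates); },
        null, interval, interval);
}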
Hopefully this post will save someone an evening or two :).

Monday 28 April 2008

Machines are predictable, people are not


I suppose we would all agree with that, and that's why smart people try to develop processes that make us more predictable. On the other hand nobody likes being constrained by anything, especially a process. Some people call this kind of lack of structure freedom, some call it chaos :). From my experience a bit of process can actually help a lot, whereas a complete lack of it sooner or later leads to a disaster. Scrum is one of the approaches that let people develop software in a predictable way, and that's the topic of the next MTUG event (29th April), which I'm not going to miss. See you there.


Wednesday 16 April 2008

Never ever synchronize threads without specifying a timeout value

Whenever there is more than one thread and more than one shared resource, there must be some synchronization in place to make sure that the overall state of the application stays consistent. Synchronization is not easy, as it very often involves locking, which can easily lead to all sorts of deadlocks and performance bottlenecks. One way of keeping out of trouble is to follow a set of guidelines. There are at least a few sources of information worth getting familiar with:
And of course :) my two cents, or rather the lessons I've learnt writing and/or debugging multithreaded code:
  1. Minimize locking - Basically, lock as little as possible and never execute code that is not related to a given shared resource inside its critical section. Most problems I've seen were caused by code in a critical section doing more than was absolutely necessary.
  2. Always use a timeout - Surprisingly, all synchronization primitives tend to encourage developers to use overloads that never time out. One of the drawbacks of this approach is that if there is a problem with a piece of code, the application hangs and nobody has any idea why. The only way to figure it out is to create a dump of the process (if you are lucky enough and the process is still hanging around) and debug it using Debugging Tools for Windows. I can tell you that this is not the best way of tackling production issues when every minute matters. But if you only use APIs that let you specify a timeout, then whenever a thread fails to acquire a critical section within a given period of time it can throw an exception and it's immediately obvious what went wrong (see the sketch after this list).

    Default                 Preferred
    Monitor.Enter(obj)      Monitor.TryEnter(obj, timeout)
    WaitHandle.WaitOne()    WaitHandle.WaitOne(timeout, exitContext)

    The same logic applies to all classes that derive from WaitHandle: Semaphore, Mutex, AutoResetEvent, ManualResetEvent.
  3. Never call external code when in a critical section - Calling a piece of code that was passed in from outside while holding a critical section is a big risk, because there is a good chance that when that code was designed nobody even considered it might run inside a critical section. Such code might execute a long-running task or try to acquire another critical section. If you do something like that you are simply asking for trouble :)
I suppose it's easy to figure out which one has bitten me the most :).
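The sketch mentioned in point 2, a small helper that fails loudly instead of hanging forever; the 15 second budget is an arbitrary example value:

using System;
using System.Threading;

static class Locking
{
    public static void WithLock(object padlock, Action body)
    {
        // TryEnter with a timeout instead of Enter: a stuck lock becomes
        // an exception with a stack trace rather than a frozen process.
        if (!Monitor.TryEnter(padlock, TimeSpan.FromSeconds(15)))
        {
            throw new TimeoutException("Couldn't acquire the lock within 15 seconds");
        }

        try
        {
            body();
        }
        finally
        {
            Monitor.Exit(padlock);
        }
    }
}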

Wednesday 26 March 2008

MIX summary in Dublin

It looks like there will be a micro MIX-like event in Dublin in May - http://visitmix.com/2008/worldwide/. It might be interesting.


Sunday 24 February 2008

There is no perfect job

I suppose we all know that there are always some "ifs" and "buts". Edge Pereira wrote a blog post about a few of them that are related to human-human interaction. If I had to choose a single sentence from his post I would go for this one: "if an employee does not know the reason of his daily work, he will never wear the company's jersey". Needless to say, I totally agree with the whole post.


Friday 15 February 2008

ReSharper 4 - nightly builds available at last

At this stage I nearly refuse to write code without ReSharper. I know it's bad, but that's not the worst addiction ever :). Fortunately, JetBrains has decided to release nightly builds of ReSharper 4 to the public. Sweet.

Tuesday 12 February 2008

C# generics - parameter variance, its constraints and how it affects WCF

CLR generics are great and there is no doubt about that. Unfortunately, C# doesn't expose the whole beauty of them, because all generic type parameters in C# are nonvariant, even though from the CLR's point of view they can be marked as nonvariant, covariant or contravariant. You can find more details about that topic here and here. In short, "nonvariant" means that even though type B is a subtype of type A, SomeType<B> is not a subtype of SomeType<A>, and therefore the following code won't compile:

List<string> stringList = null;
List<object> objectList = stringList; // this line causes a compilation error

Error 1 Cannot implicitly convert type 'System.Collections.Generic.List<string>' to 'System.Collections.Generic.List<object>'. An explicit conversion exists (are you missing a cast?)

Generics are all over the place in WCF, and you would think that this is always beneficial to all of us. Well, it depends. One of the problems I noticed is that you cannot easily handle generic types in a generic way. I know it doesn't sound good :) but that's what I wanted to say. The best example is ClientBase<T>, the base class for auto-generated proxies. VS.NET generates a proxy type per contract (interface), which might lead to a situation where you need to manage quite a few different proxies. Let's assume that we use username and password as our authentication method and we want a single place where the credentials are set. The method might look like the one below:

public void ConfigureProxy(ClientBase<Object> proxy)
{
    proxy.ClientCredentials.UserName.UserName = "u";
    proxy.ClientCredentials.UserName.Password = "p";
}

Unfortunately, we can't pass a proxy of type ClientBase<IMyContract> to that method because of the nonvariant nature of C# generics. I can see at least two ways to get around that issue. The first one requires you to clutter the method with a generic parameter despite the fact that there is no use for it.

public void ConfigureProxy<T>(ClientBase<T> proxy) where T : class
{
    proxy.ClientCredentials.UserName.UserName = "u";
    proxy.ClientCredentials.UserName.Password = "p";
}

You can imagine I'm not a big fan of this solution. The second one is based on the idea that the non-generic part of the public interface of the ClientBase class could be exposed as either a non-generic ClientBase base class or a non-generic IClientBase interface. The approach based on a non-generic class:

public abstract class ClientBase : ICommunicationObject, IDisposable
{
    public ClientCredentials ClientCredentials
    {
        get { /* some code goes here */ }
    }
}

public abstract class ClientBase<T> : ClientBase where T : class
{
}

Approach based on a non-generic interface:

public interface IClientBase : ICommunicationObject, IDisposable
{
    ClientCredentials ClientCredentials { get; }
}

public abstract class ClientBase<T> : IClientBase where T : class
{
}

Having that hierarchy in place we could define our method in the following way:


public void ConfigureProxy(ClientBase/IClientBase proxy)
{
    proxy.ClientCredentials.UserName.UserName = "u";
    proxy.ClientCredentials.UserName.Password = "p";
}

Unfortunately the WCF architects didn't think of that, and a non-generic ClientBase class or IClientBase interface doesn't exist. The interesting part of this story is that the FaultException<T> class does not suffer from the same problem, because there is a non-generic FaultException class that exposes all the non-generic members. The FaultException<T> class basically adds a single property that returns the detail of the fault, and that's it. I can find more classes that are implemented in exactly the same way as FaultException<T>. It looks like ClientBase<T> is the only widely used class that breaks that rule. I would love to see this inconvenience fixed, along with an extension of C# parameter variance.

Saturday 2 February 2008

Looking for Quality not Quantity

I'm sure you've already heard opinions that the wide opening of weblogs.asp.net might not be a good thing because it lowers the overall quality of the portal. I see more often than before that people are leaving that site. I'm not saying that the aforementioned change caused it, but something is going on.
When I started blogging I wanted to have a blog on weblogs.asp.net, but the admin replied that they didn't have any "free slots". At the time I was a little bit upset about that, but now I know that his decision was right. Basically, in most cases, before you are able to provide interesting content you need to familiarize yourself with all the stuff that's already out there. Reinventing the wheel is not interesting and learning (climbing your predecessors' shoulders) is not easy.
Current status: Climbing :).

Wednesday 30 January 2008

AccuRev - another story how to screw UI

A few days ago I found an article about a new AccuRev plugin for VS 2005, and it prompted me to describe the pain I've experienced using AccuRev. In general I love AccuRev, or to be more precise I love its back-end. The whole AccuRev architecture is based on streams that are structured as a tree. Every stream except the root has a single parent stream and zero or more child streams.

A child stream inherits content from its parent, which makes branching/merging a non-issue. Let's say you have the production version of a product in the main stream and all the new features are being implemented in child streams. Whenever there is a code change (bug fix, small improvement) in the main stream, all child streams get it automatically. Developers don't even have to think explicitly about merging. They just update their workspace, get the changes from the parent stream, resolve simple conflicts and that's it. I say simple conflicts because the more often you merge, the less time you spend on it. Additionally, in most cases all inherited changes are incorporated seamlessly because AccuRev does a really good job at merging.
If you need to include an external dependency (which is exposed as a stream as well), you just create a link between your stream and the stream where the dependency lives. This means that if you have a data access component, the New Feature 1 stream can consume version 1.0 of it whereas the New Feature 2 stream can upgrade to version 2.0 in complete isolation. As you can see, AccuRev is powerful and there is no doubt about that.
The only problem is that all the described features are exposed to the end user by a crappy UI. The UI has been written in a way that makes it useless in so many cases that it's not even funny. I'm going to show a few examples of how you must not write a UI. All of them made the AccuRev adoption in my company very hard. Fortunately, we have a brilliant build team which basically created a new web-based UI that sits on top of AccuRev and hides many of the "innovative" ideas you are about to see.
The order of the list does not matter, because different people passionately hate different AccuRev UI "features". The list is based on version 4.5.4 of AccuRev.

  1. The UI is based on Eclipse, which is written in Java. I have nothing against Java, but I haven't seen a really good UI written in that language. The AccuRev UI is not as responsive as the average native Windows app, and from time to time it starts rendering everything in gray.
  2. AccuRev comes with its own vocabulary to describe source code management:

    Common term          AccuRev term
    update               update & populate
    check in / commit    promote
    delete               defunct
    move                 cut & paste
    conflict             overlap
  3. In order to make sure that you've got all recent changes you need to both update and populate your workspace. Of course there is no single button that does both in one go. What's more, the two actions are launched in two completely different ways. The update button is the best hidden button I've ever seen; nearly every developer that starts using AccuRev fails to find it. Note that the update action is accessible neither from the main menu nor from the context menu.
  4. Whenever AccuRev cannot update your workspace because there is something wrong with it, you get a cryptic error message. In most cases it is the same error message.
  5. When someone promotes changes to the parent stream, all children get them by inheritance. From time to time the changes introduce conflicts that need to be resolved. The problem is that in order to find them you need to run two different searches - overlap and deep overlap. I know why there are two different types, but I don't get why this is so explicitly exposed to the end user.
  6. As I said before, you can include other streams in your stream. The problem is that the window that lists all available streams is just a list view without any way of filtering the items in the current view. I can spend 15 minutes trying to find the stream I'm interested in.
  7. Whenever you edit a text box the Ctrl-Z shortcut does not work.
  8. Let's say you are done with your task and you want to promote all your changes. In order to do this you need to remember to run 3 different searches (external, modified, pending) to find all your changes. Again, there is no single search that can do this for you, and needless to say, you cannot define your own searches.
  9. There are always some files that are part of your workspace but that you never want to add to the repository; the obj and bin folders are a good example. Unfortunately, the AccuRev UI does not let you exclude files or folders. Instead you need to create an environment variable where you specify all the patterns for files and folders that AccuRev should ignore. What a user friendly solution. Even the free TortoiseSVN has that feature built in.
  10. When you want to move a file that is already in the repository, you need to right-click on it, choose cut, navigate to the new location, right-click and choose paste. Why is there no explicit move command?
  11. Until recently the AccuRev integration with Visual Studio was very poor, and only now has AccuRev released a plugin that more or less works with VS 2005 (what about 2008?). My biggest problem with the plugin is that from time to time, when I compile my solution, it does something in the background that causes the compilation process to freeze for a few seconds. From my perspective this is unacceptable behaviour. I don't mind if the plugin does its housekeeping when the IDE is idle, but it must not interfere with the compilation process.
I see the current AccuRev UI as a low level API that should be completely hidden and used only to build a higher level abstraction that is simple and user friendly. Our build guys should sell their system back to AccuRev, because it fixes many of the current UI's shortcomings. In the end I want to emphasize that, as I said before, the AccuRev back-end is great, but its front-end makes you think that the whole product is half-baked, to say the least.
Since recently, a happy AccuRev user :).


Saturday 26 January 2008

Irish weather enables people to build energy efficient data centres

Two guys (Tom and Adam) are building a data centre in Cork. To be honest, I was surprised that Cork needs a data centre, but apparently there is none outside of Dublin, and some companies need to use two data centres that are at least 30 miles apart. The next thing that was kind of new to me is that you can build such a facility in a way so transparent that your customers know exactly what they pay for. And the last but not least surprise was that Irish weather plus a bit of innovative thinking can give such huge savings in terms of power consumption. From now on, whenever someone complains about Irish weather, I will tell them the story of that efficient data centre. It might help :).

Tuesday 15 January 2008

LINQ query versus compiled LINQ query

Rico Mariani changed his position a few months ago, but he still manages to provide his Performance Quizzes from time to time. Today he explains the difference between LINQ queries and compiled LINQ queries. It's obvious that one should cache as much as possible, because in most cases it's better to pay a high price once than a slightly reduced one many times. But the interesting part of the post is how Microsoft came up with their current implementation of compiled LINQ queries. Nearly every good performance solution is a matter of realizing that being very generic doesn't pay off. Crazy about performance topics? Go and solve Rico's latest quiz. Once you are done, check if you are right.
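For reference, this is roughly what the cached flavour looks like with LINQ to SQL's CompiledQuery; the table and the DataContext are hypothetical, just enough to show the pattern:

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string City;
}

public class MyDataContext : DataContext
{
    public MyDataContext(string connection) : base(connection) { }
    public Table<Customer> Customers { get { return GetTable<Customer>(); } }
}

public static class Queries
{
    // The lambda is translated to SQL once, when Compile runs,
    // instead of on every single execution.
    public static readonly Func<MyDataContext, string, IQueryable<Customer>> ByCity =
        CompiledQuery.Compile((MyDataContext db, string city) =>
            db.Customers.Where(c => c.City == city));
}

Calling Queries.ByCity(db, "Dublin") then skips the expression-tree-to-SQL translation on every call, which is exactly the saving measured in the quiz.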

Friday 4 January 2008

Ireland is not a hero?

It looks like there will not be a launch event for Visual Studio 2008, Windows Server 2008 and SQL Server 2008 in Dublin. The registration page does not list Ireland, so I suppose this year it's someone else's turn.