Thursday, June 28, 2012

Basic Threading


A thread is an independent execution path

C# supports parallel execution of code through multithreading.
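As a quick illustration (a minimal sketch of my own, not from the original post), the snippet below starts a second thread with the Thread class so that two loops run in parallel:

using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        // Start a new thread that prints 'y' repeatedly.
        Thread t = new Thread(() =>
        {
            for (int i = 0; i < 1000; i++) Console.Write("y");
        });
        t.Start();

        // Meanwhile, the main thread prints 'x'; the scheduler interleaves the two.
        for (int i = 0; i < 1000; i++) Console.Write("x");

        t.Join();   // Wait for the worker thread to finish.
    }
}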

How Threading Works

Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked do not consume CPU time.
On a single-processor computer: the thread scheduler performs time-slicing — rapidly switching execution between each of the active threads.
On a multi-processor computer: Multithreading is implemented with a mixture of time-slicing and genuine concurrency, where different threads run code simultaneously on different CPUs. It’s almost certain there will still be some time-slicing, because of the operating system’s need to service its own threads — as well as those of other applications.
A thread is said to be preempted when its execution is interrupted due to an external factor such as time-slicing.

When to Use

  • Maintaining a responsive user interface
  • Making efficient use of an otherwise blocked CPU
  • Parallel programming
  • Speculative execution
  • Allowing requests to be processed simultaneously
Thread Pooling
Whenever you start a thread, a few hundred microseconds are spent organizing such things as a fresh private local variable stack. The thread pool cuts these overheads by sharing and recycling threads, allowing multithreading to be applied at a very granular level without a performance penalty.
The thread pool also keeps a lid on the total number of worker threads it will run simultaneously. Too many active threads throttle the operating system with administrative burden and render CPU caches ineffective. Once a limit is reached, jobs queue up and start only when another finishes.
The thread pool starts out with one thread in its pool. As tasks are assigned, the pool manager “injects” new threads to cope with the extra concurrent workload, up to a maximum limit. After a sufficient period of inactivity, the pool manager may “retire” threads if it suspects that doing so will lead to better throughput.
You can set the upper limit of threads that the pool will create by calling ThreadPool.SetMaxThreads; the defaults are:
  • 1023 in Framework 4.0 in a 32-bit environment
  • 32768 in Framework 4.0 in a 64-bit environment
  • 250 per core in Framework 3.5
  • 25 per core in Framework 2.0
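As a brief illustration (my own sketch, not from the book), work can be queued onto a pooled thread with ThreadPool.QueueUserWorkItem, and the current limit can be inspected with ThreadPool.GetMaxThreads:

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        int workerThreads, ioThreads;
        ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);
        Console.WriteLine("Max worker threads: " + workerThreads);

        // Queue a small job; it runs on a recycled pool thread.
        ThreadPool.QueueUserWorkItem(state =>
            Console.WriteLine("On a pooled thread: " + Thread.CurrentThread.IsThreadPoolThread));

        Thread.Sleep(1000);   // Give the pooled thread time to run before Main exits.
    }
}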
Synchronization
Synchronization constructs can be divided into four categories: Blocking, Locking, Signaling, and NonBlocking.
Blocking:
Wait for another thread to finish or for a period of time to elapse.
Sleep, Join, and Task.Wait are simple blocking methods.
Locking:
Limit the number of threads that can perform some activity or execute a section of code at a time.
Monitor, Mutex, SpinLock (Exclusive locking)
Semaphore, SemaphoreSlim, Reader/Writer locks (Nonexclusive locking)
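For example (a minimal sketch of my own), exclusive locking with the lock statement (which uses Monitor underneath) ensures only one thread at a time executes the protected section:

using System;
using System.Threading;

class LockDemo
{
    static readonly object _locker = new object();
    static int _count;

    static void Main()
    {
        Thread a = new Thread(Increment);
        Thread b = new Thread(Increment);
        a.Start(); b.Start();
        a.Join(); b.Join();
        Console.WriteLine(_count);   // Always 200000, because the increments never interleave
    }

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
            lock (_locker)           // Only one thread can hold the lock at a time
                _count++;
    }
}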

Signaling:
These allow a thread to pause until receiving a notification from another.
Event wait handles and Monitor’s Wait/Pulse methods. Framework 4.0 introduces the CountdownEvent and Barrier.
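A small sketch of my own: signaling with an AutoResetEvent, where the worker pauses until the main thread notifies it:

using System;
using System.Threading;

class SignalDemo
{
    static readonly AutoResetEvent _signal = new AutoResetEvent(false);

    static void Main()
    {
        new Thread(() =>
        {
            Console.WriteLine("Worker: waiting for the signal...");
            _signal.WaitOne();                 // Blocks until another thread calls Set
            Console.WriteLine("Worker: received the signal.");
        }).Start();

        Thread.Sleep(500);
        _signal.Set();                         // Wake the waiting thread
    }
}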

NonBlocking:
These protect access to a common field by calling upon processor primitives.
Thread.MemoryBarrier, Thread.VolatileRead, Thread.VolatileWrite, the volatile keyword, and the Interlocked class.
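For instance (again my own sketch), the counter from the locking example above can be made lock-free with Interlocked, which performs the increment as a single atomic operation:

using System;
using System.Threading;

class InterlockedDemo
{
    static int _count;

    static void Main()
    {
        Thread a = new Thread(Work);
        Thread b = new Thread(Work);
        a.Start(); b.Start();
        a.Join(); b.Join();
        Console.WriteLine(_count);   // Always 200000, with no locks taken
    }

    static void Work()
    {
        for (int i = 0; i < 100000; i++)
            Interlocked.Increment(ref _count);   // Atomic read-modify-write
    }
}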

Wednesday, June 20, 2012

Events - Adding event keyword to a delegate C#

By now I expect we know how to create a delegate and how to declare an event using that delegate. Let us get into the details of how adding the event keyword changes the way the delegate is treated.

The 'event' keyword in front of a delegate does more than expose the delegate to subscribers. Here, subscribers refers to the classes that attach their methods to the delegate exposed by the broadcaster, and the broadcaster is the class that exposes the delegate with the event keyword.

When the compiler discovers an event declaration, it does three things that make an event different from a normal delegate field.

public delegate void DateTimeHandler(object sender, TimeEventArgs args);
public event DateTimeHandler DateTimeChanged;

1. The compiler creates event accessors for the event, much like property accessors:

private DateTimeHandler _timeHandler;
public event DateTimeHandler DateTimeChanged
{
      add { _timeHandler += value; }
      remove { _timeHandler -= value; }
}

2. The compiler checks for any references to DateTimeChanged inside the broadcaster class other than += and -= operations, and redirects them to the underlying private _timeHandler delegate field.

3. The compiler translates all += and -= operations on the event into calls to the add and remove accessors generated in step 1. This makes the behavior of += and -= different when applied to events.

Differences between a normal delegate and an event in terms of their behaviors are listed below:
      
       1. An event can only be invoked by the broadcaster; a delegate can be invoked by any class that has access to it. With a plain delegate, a subscriber class can broadcast to all other subscribers by invoking the delegate itself.

      2. A normal delegate can be reset by any class that has access to it: assigning the delegate to a single method wipes out the methods that all other subscribers had added.

     3. An event can only be subscribed to or unsubscribed from; the subscriber list cannot be replaced or cleared from outside, as it can be with a normal delegate.

Consider the case where the DateTimeHandler delegate in the above example is exposed directly instead of the event DateTimeChanged. Let's assume class A and class B use this delegate by attaching their own methods, like below.

Controller.Instance.DateTimeHandler += a.MethodA;
Controller.Instance.DateTimeHandler += b.MethodB;

Some other class C, which also has access to the delegate, can set DateTimeHandler to a different method altogether and thereby make it lose all other subscribers' (i.e. A's and B's) registrations.

Controller.Instance.DateTimeHandler = c.MethodC; // Now the controller's delegate loses track of all other subscribers and methods.

Class C can also invoke the DateTimeHandler delegate, thereby calling MethodA and MethodB of class A and class B respectively, like below:

if(isBadIntention)
{
      Controller.Instance.DateTimeHandler(this, dummyEventArgs);
}
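For contrast, here is a sketch of my own (reusing the Controller, DateTimeHandler and TimeEventArgs names from above) showing that when DateTimeChanged is declared with the event keyword, the compiler rejects both kinds of misuse:

public class Controller
{
    public static readonly Controller Instance = new Controller();

    // Declared as an event: outside the class, only += and -= are allowed.
    public event DateTimeHandler DateTimeChanged;

    // Only the broadcaster itself can raise the event.
    public void RaiseTimeChanged(TimeEventArgs args)
    {
        DateTimeHandler handler = DateTimeChanged;   // inside the class this refers to the backing field
        if (handler != null) handler(this, args);
    }
}

// From a subscriber such as class C:
// Controller.Instance.DateTimeChanged = c.MethodC;             // compile error: events only allow += or -=
// Controller.Instance.DateTimeChanged(this, dummyEventArgs);   // compile error: cannot be invoked outside Controller
Controller.Instance.DateTimeChanged += c.MethodC;                // allowed: subscribe only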

Reference:
All thanks to "C# 4.0 in a Nutshell" by Joe Albahari. I have written here what I understood from Albahari's book and from my own learning.



Tuesday, June 12, 2012

Fault Contracts in WCF

* Fault contracts are used to send exception information from a service to its clients.

* All information that passes between a service and its clients must be serialized. Hence the exception information that needs to be passed to the client must either be of a serializable type or be a defined data contract that can hold the details of the exception.

* Fault contracts are typically used in a release environment, to make sure clients do not get to see the internal implementation of the service through raw exception details.

* When we need all exception details to be passed on to the client for debugging or development purposes, we can set includeExceptionDetailInFaults="true" instead of using a fault contract. All we need is to set this attribute in the service behavior defined for the service endpoint, and the client then receives the full exception details. The client has to handle these non-typed exceptions in its code.

<behaviors>
  <serviceBehaviors>
    <behavior name="MyServiceBehavior">
      <serviceMetadata httpGetEnabled="True"/>
      <serviceDebug includeExceptionDetailInFaults="True" />
    </behavior>
  </serviceBehaviors>
</behaviors>

* Every OperationContract that might raise a fault must declare the fault type in its FaultContract attribute.
[OperationContract]
[FaultContract(typeof(UploadFault))]
void UploadData(byte [] data);

Exception Type Defined:

[DataContract]
public class UploadFault
{    
    private string message;
    private string errorType;

    [DataMember]
    public string ErrorMessage
    {
        get { return message; }
        set { message = value; }
    }

    [DataMember]        
    public string ErrorType
    {
        get { return errorType; }
        set { errorType = value; }
    }
}
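To complete the picture, here is a sketch of my own (reusing the UploadFault and UploadData names above; proxy, fileBytes and the processing code are placeholders). The service wraps its failure in FaultException<UploadFault> from System.ServiceModel, and the client catches the typed fault:

// Service side: wrap the failure in the declared fault type.
public void UploadData(byte[] data)
{
    try
    {
        // ... process the upload ...
    }
    catch (Exception ex)
    {
        UploadFault fault = new UploadFault
        {
            ErrorMessage = "Upload failed.",
            ErrorType = ex.GetType().Name
        };
        throw new FaultException<UploadFault>(fault, new FaultReason("Upload failed."));
    }
}

// Client side: catch the typed fault instead of a generic exception.
try
{
    proxy.UploadData(fileBytes);
}
catch (FaultException<UploadFault> fe)
{
    Console.WriteLine(fe.Detail.ErrorType + ": " + fe.Detail.ErrorMessage);
}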



Tuesday, June 5, 2012

Callbacks in WCF with an Example

WCF allows a service to call back its clients. In this arrangement the service acts like a client and the client acts like a service. The client must facilitate the callback by hosting a callback object and by providing the callback endpoint information with every call to the service. The steps involved in creating a callback are:

a. The service defines the callback type (an interface that the client's callback handler class must implement). The callback contract need not be marked as a service contract; that is implied.
b. The service contract names the callback type in its ServiceContract attribute.

//call back type

interface IExampleCallBack
{
   [OperationContract] 
   void ClientAction();
}

[ServiceContract(CallbackContract = typeof(IExampleCallBack))]
interface IExampleContract
{
   [OperationContract] 
   void ServiceMethod1();
}

c. The client creates the callback host by providing a binding that supports callbacks, an endpoint for the callback, and the callback class that implements the callback type. This class need not have the ServiceBehavior attribute.
d. The client instantiates the InstanceContext class, passing the callback handler object to the constructor.
e. Since the client must pass the callback endpoint information to the service, it passes this InstanceContext object while creating the proxy.


class ExampleCallBack : IExampleCallBack
{
   public void ClientAction()
   {
      // do something based on the service's call
   }
}
IExampleCallBack callback = new ExampleCallBack();
InstanceContext context = new InstanceContext(callback);
ExampleContractClient proxy = new ExampleContractClient(context);
proxy.ServiceMethod1();

The service can use OperationContext to get the callback instance and then call methods on it, i.e. methods on the client side.

IExampleCallBack callback = OperationContext.Current.GetCallbackChannel<IExampleCallBack>();
callback.ClientAction();  

Important Notes and Points about Callbacks:

1. HTTP does not support callbacks because of its connectionless nature. Hence we cannot use BasicHttpBinding or WSHttpBinding for the callback mechanism. NetTcpBinding and NetNamedPipeBinding can support callbacks due to their underlying bidirectional transports. WSDualHttpBinding also supports callbacks by setting up two HTTP channels underneath.
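As an illustration (my own sketch; the net.tcp address is a placeholder), the client from the earlier steps can build a duplex proxy over NetTcpBinding with a DuplexChannelFactory instead of a generated proxy class:

// Client side: create a duplex channel over a callback-capable binding.
InstanceContext context = new InstanceContext(new ExampleCallBack());
DuplexChannelFactory<IExampleContract> factory =
    new DuplexChannelFactory<IExampleContract>(
        context,
        new NetTcpBinding(),
        new EndpointAddress("net.tcp://localhost:8000/ExampleService"));   // placeholder address

IExampleContract proxy = factory.CreateChannel();
proxy.ServiceMethod1();   // The service can now call back into ExampleCallBack.ClientAction()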

2. By default a service class is single-threaded: the service instance is associated with a lock, and only one thread at a time can own this lock to access the instance. Invoking the client's code requires the service thread to stay blocked while the callback is invoked. A deadlock would occur because, when the client replies, processing that reply would also need ownership of the service instance's lock. There are three ways to avoid this deadlock:
    a. Making the service behavior multithreaded, which means more resources and more complex synchronization.
    b. Making the service class re-entrant. A service can be configured as re-entrant by setting the concurrency mode to ConcurrencyMode.Reentrant. When the service is configured this way, the service instance is still locked and owned by a single thread, but WCF quietly releases the lock before invoking a callback method.
    c. Making the callback methods one-way. This can be done by setting IsOneWay to true on the callback operation contract. This ensures there will be no reply from the client, so even though the service is single-threaded there can be no deadlock.
    

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class ExampleService : IExampleContract
{
   public void ServiceMethod1()
   {
      IExampleCallBack callback = OperationContext.Current.GetCallbackChannel<IExampleCallBack>();
      callback.ClientAction();
   }
}

IsOneWay Attribute:



interface IExampleCallBack
{
   [OperationContract(IsOneWay = true)] 
   void ClientAction();
}