B# .NET Blog News Feed 
Tuesday, July 13, 2010  |  From B# .NET Blog

Introduction

In preparation for some upcoming posts related to LINQ (what else?), Windows PowerShell and Rx, I had to set up a local LDAP-capable directory service. (Hint: It will pay off to read till the very end of the post if you’re wondering what I’m up to...) In this post I’ll walk the reader through the installation, configuration and use of Active Directory Lightweight Directory Services (LDS), formerly known as Active Directory Application Mode (ADAM). Having used the technology several years ago, in relation to the LINQ to Active Directory project (which as an extension to this blog series will receive an update), it was a warm and welcome reencounter.

 

What’s Lightweight Directory Services anyway?

Use of hierarchical storage and auxiliary services provided by technologies like Active Directory often has advantages over alternative designs, e.g. using a relational database. For example, user accounts may be stored in a directory service for an application to make use of. While Active Directory seems the natural habitat to store (and replicate, secure, etc.) additional user information, IT admins will likely point you – the poor developer – at the door when asking to extend the schema. That’s one of the places where LDS comes in, offering the ability to take advantage of the programming model of directory services while keeping your hands off “the one and only AD schema”.

The LDS website quotes other use cases, which I’ll just copy here verbatim:

Active Directory Lightweight Directory Service (AD LDS), formerly known as Active Directory Application Mode, can be used to provide directory services for directory-enabled applications. Instead of using your organization’s AD DS database to store the directory-enabled application data, AD LDS can be used to store the data. AD LDS can be used in conjunction with AD DS so that you can have a central location for security accounts (AD DS) and another location to support the application configuration and directory data (AD LDS). Using AD LDS, you can reduce the overhead associated with Active Directory replication, you do not have to extend the Active Directory schema to support the application, and you can partition the directory structure so that the AD LDS service is only deployed to the servers that need to support the directory-enabled application.

  • Install from Media Generation. The ability to create installation media for AD LDS by using Ntdsutil.exe or Dsdbutil.exe.

  • Auditing. Auditing of changed values within the directory service.

  • Database Mounting Tool. Gives you the ability to view data within snapshots of the database files.

  • Active Directory Sites and Services Support. Gives you the ability to use Active Directory Sites and Services to manage the replication of the AD LDS data changes.

  • Dynamic List of LDIF files. With this feature, you can associate custom LDIF files with the existing default LDIF files used for setup of AD LDS on a server.

  • Recursive Linked-Attribute Queries. LDAP queries can follow nested attribute links to determine additional attribute properties, such as group memberships.

Obviously that last bullet point grabs my attention through I will retain myself from digressing here.

 

Getting started

If you’re running Windows 7, the following explanation is the right one for you. For older versions of the operating system, things are pretty similar though different downloads will have to be used. For Windows Server 2008, a server role exists for LDS. So, assuming you’re on Windows 7, start by downloading the installation media over here. After installing this, you should find an entry “Active Directory Lightweight Directory Services Setup Wizard” under the “Administrative Tools” section in “Control Panel”:

image

LDS allows you to install multiple instances of directory services on the same machine, just like SQL Server allows multiple server instances to co-exist. Each instance has a name and listens on certain ports using the LDP protocol. Starting this wizard – which lives under %SystemRoot%\ADAM\adaminstall.exe, revealing the former product name – brings us here:

image

After clicking Next, we need to decide whether we create a new unique instance that hasn’t any ties with existing instances, or whether we want to create a replicate of an existing instance. For our purposes, the first option is what we need:

image

Next, we’re asked for an instance name. The instance name will be used for the creation of a Windows Service, as well as to store some settings. Each instance will get its own Windows Service. In our sample, we’ll create a directory for the Northwind Employees tables, which we’ll use to create accounts further on.

image

We’re almost there with the baseline configuration. The next question is to specify a port number, both for plain TCP and for SSL-encrypted traffic. The default ports, 389 and 636, are fine for us. Later we’ll be able to connect to the instance by connecting to LDP over port 389, e.g. using the System.DirectoryServices namespace functionality in .NET. Notice every instance of LDS should have its own port number, so only one can be using the default port numbers.

image

Now that we have completed the “physical administration”, the wizard moves on to a bit of “logical administration”. More specifically, we’re given the option to create a directory partition for the application. Here we choose to create such a partition, though in many concrete deployment scenarios you’ll want the application’s setup to create this at runtime. Our partition’s distinguished name will mimic a “Northwind.local” domain containing a partition called “Employees”:

image

After this bit of logical administration, some more physical configuration has to be carried out, specifying the data files location and the account to run the services under. For both, the default settings are fine. Also the administrative account assigned to manage the LDS instance can be kept as the currently logged in user, unless you feel the need to change this in your scenario:

image image

Finally, we’ve arrived at an interesting step where we’re given the option to import LDIF files. And LDIF file, with extension .ldf, contains the definition of a class that can be added to a directory service’s schema. Basically those contain things like attributes and their types. Under the %SystemRoot%\ADAM folder, a set of out-of-the-box .ldf files can be found:

image

Instead of having to run the ldifde.exe tool, the wizard gives us the option to import LDIF files directly. Those classes are documented in various places, such as RFC2798 for inetOrgPerson. On TechNet, information is presented in a more structured manner, e.g revealing that inetOrgPerson is a subclass of user. Custom classes can be defined and imported after setup has completed. In this post, we won’t extend the schema ourselves but we will simply be using the built-in User class so let’s tick that one:

image

After clicking Next, we get a last chance to revisit our settings or can confirm the installation. At this point, the wizard will create the instance – setting up the service – and import the LDIF files.

image image

Congratulations! Your first LDS instance has materialized. If everything went alright, the NorthwindEmployees service should show up:

image

 

Inspecting the directory

To inspect the newly created directory instance, a bunch of tools exist. One is ADSI Edit which you could already see in the Administrative Tools. To set it up, open the MMC-based tool and go to Action, Connect to… In the dialog that appears, specify the server name and choose Schema as the Naming Context.

image

For example, if you want to inspect the User class, simply navigate to the Schema node in the tree and show the properties of the User entry.

image

To visualize the objects in the application partition, connect using the distinguished name specified during the installation:

image

Now it’s possible to create a new object in the directory using the context menu in the content pane:

image

After specifying the class, we get to specify the “CN” name (for common name) of the object. In this case, I’ll use my full name:

image image

We can also set additional attributes, as shown below (using the “physicalDeliveryOfficeName” to specify the office number of the user):

image image

After clicking Set, closing the Attributes dialog and clicking Finish to create the object, we see it pop up in the items view of the ADSI editor snap-in:

image

 

Programmatic population of the directory

Obviously we’re much more interested in a programmatic way to program Directory Services. .NET supports the use of directory services and related protocols (LDAP in particular) through the System.DirectoryServices namespace. In a plain new Console Application, add a reference to the assembly with the same name (don’t both about other assemblies that deal with account management and protocol stuff):

image

For this sample, I’ll also assume the reader got a Northwind SQL database sitting somewhere and knows how to get data out of its Employees table as rich objects. Below is how things look when using the LINQ to SQL designer:

image

We’ll just import a few details about the users; it’s left to the reader to map other properties onto attributes using the documentation about the user directory services class. Just a few lines of code suffice to accomplish the task (assuming the System.DirectoryServices namespace is imported):

static void Main()
{
var path = "LDAP://bartde-hp07/CN=Employees,DC=Northwind,DC=local";
var root = new DirectoryEntry(path);

var ctx = new NorthwindDataContext();
foreach (var e in ctx.Employees)
{
var cn = "CN=" + e.FirstName + e.LastName;

var u = root.Children.Add(cn, "user");
u.Properties["employeeID"].Value = e.EmployeeID;
u.Properties["sn"].Value = e.LastName;
u.Properties["givenName"].Value = e.FirstName;
u.Properties["comment"].Value = e.Notes;
u.Properties["homePhone"].Value = e.HomePhone;
u.Properties["photo"].Value = e.Photo.ToArray();
u.CommitChanges();
}
}

After running this code – obviously changing the LDAP path to reflect your setup – you should see the following in ADSI Edit (after hitting refresh):



image



Now it’s just plain easy to write an application that visualizes the employees with their data. We’ll leave that to the UI-savvy reader (just to tease that segment of my audience, I’ve also imported the employee’s photo as a byte-array).



 


A small preview of what’s coming up



To whet the reader’s appetite about next episodes on this blog, below is a single screenshot illustrating something – IMHO – rather cool (use of LINQ to Active Directory is just an implementation detail below):



image



Note: What’s shown here is the result of a very early experiment done as part of my current job on “LINQ to Anything” here in the “Cloud Data Programmability Team”. Please don’t fantasize about it as being a vNext feature of any product involved whatsoever. The core intent of those experiments is to emphasize the omnipresence of LINQ (and more widely, monads) in today’s (and tomorrow’s) world. While we’re not ready to reveal the “LINQ to Anything” mission in all its glory (rather think of it as “LINQ to the unimaginable”), we can drop some hints.



Stay tuned for more!

Wednesday, July 07, 2010  |  From B# .NET Blog

Introduction

A while ago I was explaining runtime mechanisms like the stack and the heap to some folks. (As an aside, I’m writing a debugger course on “Advanced .NET Debugging with WinDbg with SOS”, which is an ongoing project. Time will tell when it’s ready to hit the streets.) Since the context was functional programming where recursion is a typical substitute (or fuel if you will) for loops, an obvious topic for discussion is the possibility to hit a stack overflow. Armed with my favorite editor, Notepad.exe, and the C# command-line compiler, I quickly entered the following sample to show “looping with recursion” and how disaster can strike:

using System;

class Program
{
    static void Main()
    {
        Rec(0);
    }

    static void Rec(int n)
    {
        if (n % 1024 == 0)
            Console.WriteLine(n);

        Rec(n + 1);
    }
}

The module-based condition in there is to avoid excessive slowdowns due to Console.WriteLine use, which is rather slow due to the way the Win32 console output system works. To my initial surprise, the overflow didn’t come anywhere in sight and the application kept running happily:

image

I rather expected something along the following lines:

image

So, what’s going on here? Though I realized pretty quickly what the root cause is of this unexpected good behavior, I’ll walk the reader through the thought process used to “debug” the application’s code.

 

I made a call, didn’t I?

The first thing to check is that we really are making a recursive call in our Rec method. Obviously ildasm is the way to go to inspect that kind of stuff, so here’s the output which we did expect.

image

In fact, the statement made above – “which we did expect” – is debatable. Couldn’t the compiler just turn the call into a jump right to the start of the method after messing around a bit with the local argument slot that holds argument value n? That way we wouldn’t have to make a call and the code would still work as expected. Essentially what we’re saying here is that the compiler could have turned the recursive call into a loop construct. And indeed, some compilers do exactly that. For example, consider the following F# sample:

#light

let rec Rec n =
   if n % 1024 = 0 then
       printfn "%d" n

   Rec (n + 1)

Rec 0

Notice the explicit indication of the recursive nature of a function by means of the “rec” keyword. After compiling this piece of code using fsc.exe, the following code is shown in Reflector (decompiling to C# syntax) for the Rec function:

image

The mechanics of the printf call are irrelevant. What matters is the code that’s executed after the n++ statement, which isn’t a recursive call to Rec itself. Instead, the compiler has figured out a loop can be used. Hence, no StackOverflowException will result.

Back to the C# sample though. What did protect the code from overflowing the stack? Let’s have some further investigations, but first … some background.

 

Tail calls

One optimization that can be carried out for recursive functions is to spot tail calls and optimize them away into looping – or at a lower level, jumps – constructs. A tail call is basically a call after which the current stack frame is no longer needed upon return from the call. For example, our simple sample can benefit from tail call optimization since the Rec method doesn’t really do anything anymore after returning from the recursive Rec call:

static void Rec(int n)
{
    if (n % 1024 == 0)
        Console.WriteLine(n);

    Rec(n + 1);
}

This kind of optimization – as carried out by F# in the sample shown earlier – can’t always take place. For example, consider the following definition of a factorial method:

static int Fac(int n)
{
    if (n == 0)
        return 1;

    return n * Fac(n – 1);
}

The above has quite a few issues such as the inability to deal with negative values and obviously the arithmetic overflow disaster that will strike when the supplied “n” parameter is too large for the resulting factorial to fit in an Int32. The BigInteger type introduced in .NET 4 (and not in .NET 3.5 as originally planned) would be a better fit for this kind of computation, but let’s ignore this fact for now.

A more relevant issue in the context of our discussion is the code’s use of recursion where a regular loop would suffice, but now I’m making a value judgment of imperative control flow constructs versus a more functional style of using recursion. That’s true nonetheless is the fact that the code above is not immediately amenable for tail call optimization. To see why this is, rewrite the code as follows:

static int Fac(int n)
{
    if (n == 0)
        return 1;

    int t = Fac(n – 1);
    return n * t;

}

See what’s going on? After returning from the recursive call to Fac, we still need to have access to the value of “n” in the current call frame. As a result, we can’t reuse the current stack frame when making the recursive call. Implementing the above in F# (just for the sake of it) and decompiling it, shows the following code:

image

The culprit keeping us from employing tail call optimization is the multiplication instruction needed after the return from the recursive call to Fac. (Note: the second operand to the multiplication was pushed onto the evaluation stack in IL_0005; in fact IL_0006 could also have been a dup instruction.) C# code will be slightly different but achieve the same computation (luckily!).

Sometimes it’s possible to make a function amenable for tail call optimization by carrying out a manual rewrite. In the case of the factorial method, we can employ the following trick:

static int Fac(int n)
{
    return Fac_(n, 1);
}

static int Fac_(int n, int res)
{
    if (n == 0)
        return res;

    return Fac_(n – 1, n * res);
}

Here, we’re not only decrementing n in every recursive call, we’re also keeping the running multiplication at the same time. In my post Jumping the trampoline in C# – Stack-friendly recursion, I explained this principle in the “Don’t stand on my tail!” section. The F# equivalent of the code, shown below, results in tail call optimization once more:

let rec Fac_ n res =
   if n = 0 then
       res
   else
       Fac_ (n - 1) (n * res)

let Fac n =
   Fac_ n 1

The compilation result is shown below:

image

You can clearly see the reuse of local argument slots.

 

A smart JIT

All of this doesn’t yet explain why the original C# code is just working fine though our look at the generated IL code in the second section of this post did reveal the call instruction to really be there. One more party is involved in getting our much beloved piece of C# code to run on the bare metal of the machine: the JIT compiler.

In fact, as soon as I saw the demo not working as intended, the mental click was made to go and check this possibility. Why? Well, the C# compiler doesn’t optimize tail calls into loops, nor does it emit tail.call instructions. The one and only remaining party is the JIT compiler. And indeed, since I’m running on x64 and am using the command-line compiler, the JIT compiler is more aggressive about performing tail call optimizations.

Let’s explain a few things about the previous paragraph. First of all, why does the use of the command-line compiler matter? Won’t the same result pop up if I used a Console Application project in Visual Studio? Not quite, if you’re using Visual Studio 2010 that is. One the decisions made in the last release is to mark executables IL assemblies (managed .exe files) as 32-bit only. That doesn’t mean the image contains 32-bit instructions (in fact, the C# compiler never emits raw assembler); all it does it tell the JIT to only emit 32-bit assembler at runtime, hence resulting in a WOW64 process on 64-bit Windows. The reasons for this are explained in the Rick Byer’s blog post on the subject. In our case, we’re running the C# compiler without the /platform:x86 flag – which now is passed by the default settings of a Visual Studio 2010 executable (not library!) project – therefore resulting in an “AnyCPU” assembly. The corflags.exe tool can be used to verify this claim:

image

In Visual Studio 2010, a new Console Application project will have the 32-bit only flag set by default. Again, reasons for this decision are brought up in Rick’s post on the subject.

image

Indeed, when running the 32-bit only assembly, a StackOverflowException results. An alternative way to tweak the flags of a managed assembly is by using corflags.exe itself, as shown below:

image

It turns out when the 64-bit JIT is involved, i.e. when the AnyCPU Platform target is set – the default on the csc.exe compiler – tail call optimization is carried out for our piece of code. A whole bunch of conditions under which tail calls can be optimized by the various JIT flavors can be found on David Broman’s blog. Grant Richins has been blogging about improvements made in .NET 4 (which don’t really apply to our particular sample). One important change in .NET 4 is the fact the 64-bit JIT now honors the “tail.” prefix on call instructions, which is essential to the success of functional style languages like F# (indeed, F#’s compiler actually has a tailcalls flags, which is on by default due to the language’s nature).

 

Seeing the 64-bit JIT’s work in action

In order to show the reader the generated x64 code for our recursive Rec method definition, we’ll switch gears and open up WinDbg, leveraging the SOS debugger extension. Obviously this requires one to install the Debugging Tools for Windows. Also notice the section’s title to apply to x64. For x86 users, the same experiment can be carried out, revealing the x86 instructions generated without the tail call optimization, hence explaining the overflow observed on 32-bit executions.

Loading the ovf.exe sample (making sure the 32-bit only flag is not set!) under the WinDbg debugger – using windbg.exe ovf.exe – brings us to the first loader breakpoint as shown below. In order to load the Son Of Strike (SOS) debugger extension, set a module load breakpoint for clrjit.dll (which puts us in a convenient spot where the CLR has been sufficiently loaded to use SOS successfully). When that breakpoint hits, the extension can be loaded using .loadby sos clr:

image

Next, we need to set a breakpoint on the Rec method. In my case, the assembly’s file name is ovf.exe, the class is Program and the method is Rec, requiring me to enter the following commands:

image

The !bpmd extension command is used to set a breakpoint based on a MethodDesc – a structure used by the CLR to describe a method. Since the method hasn’t been JIT compiled yet, and hence no physical address for the executable code is available yet, a pending breakpoint is added. Now we let go the debugger and end up hitting the breakpoint which got automatically set when the JIT compiler took care of compiling the method (since it came “in sight” for execution, i.e. because of Main’s call into it). Using the !U – for unassemble – command we can now see the generated code:

image

Notice the presence of code like InitializeStdOutError which is the result from inlining of the Console.WriteLine method’s code. What’s going on here with regards to the tail call behavior is the replacement of a call instruction with a jump simply to the beginning of the generated code. The rest of the code can be deciphered with a bit of x86/x64 knowledge. For one thing, you can recognize the 1024 value (used for our modulo arithmetic) in 3FF which is 1023. The module check stretches over a few instructions that basically use a mask over the value to see whether any of the low bits is non-zero. If so, the value is not dividable by 1024; otherwise, it is. Based on this test (whose value gets stored in eax), a jump is made or not, either going through the path of calling Console.WriteLine or not.

 

Contrasting with the x86 assembler being used

In the x86 setting, we’ll see different code. To show this, let’s use a Console Application in Visual Studio 2010, whose default platform target is – as mentioned earlier – 32-bit. In order to load SOS from inside the Immediate Window, enable the native debugger through the project settings:

image

Using similar motions as before, we can load the SOS extension upon hitting a breakpoint. Instead of using !bpmd, we can use !name2ee to resolve the JITTED Code Address for the given symbol, in this case the Program.Rec method:

image

Inspecting the generated code, one will encounter the following call instruction to the same method. This is the regular recursive call without any tail call optimization carried out. Obviously this will cause a StackOverflowException to occur. Also notice from the output below that the Console.WriteLine method call didn’t get inlined in this particular x86 case.

image

 

Revisiting the tail. instruction prefix

As referred to before, the IL instruction set has a tail. prefix for call instructions. Before .NET 4, this was merely a hint to the JIT compiler. For x86, it was (and still is) a request of the IL generator to the JIT compiler to perform a tail call. For x64, prior to CLR 4.0, this request was not always granted. For our x86 case, we can have a go at inserting the tail. prefix for the recursive call in the code generated by the C# compiler (which doesn’t emit this instruction by itself as explained before). Using ildasm’s /out parameter, you can export the ovf.exe IL code to a text file. Notice the COR flags have been set to “32-bit required” using either the x86 platform target flag on csc.exe or by using corflags /32bit+:

image

Now tweak the code of Rec as shown below. After a tail call instruction, no further code should execute other than a ret. If this rule isn’t obeyed, the CLR will throw an exception signaling an invalid program. Hence we remove the nop instruction that resulted from a non-optimized build (Debug build or csc.exe use without /o+ flag). To turn the call into a tail call one, we add the “tail.” prefix. Don’t forget the space after the dot though:

image

The session of roundtripping through ILDASM and ILASM with the manual tweak in Notepad shown above is shown here:

image

With this change in place, the ovf.exe will keep on running without overflowing the stack. Looking at the generated code through the debugger, one would see a jmp instruction instead of a call, explaining the fixed behavior.

 

Conclusion

Tail calls are the bread and butter of iterative programs written in a functional style. As such, the CLR has evolved to support tail call optimization in the JIT when the tail. prefix is present, e.g. as emitted by the F# compiler when needed (though the IL code itself may be turned into a loop by the compiler itself). One thing to know is that on x64, the JIT is more aggressive about detecting and carrying out tail recursive calls (since it has a good value proposition with regards to “runtime intelligence cost” versus “speed-up factor”). For more information, I strongly recommend you to have a look at the CLR team’s blog: Tail Call Improvements in .NET Framework 4.

Tuesday, July 06, 2010  |  From B# .NET Blog

Introduction

Recently I’ve been playing with Windows PowerShell 2.0 again, in the context of my day-to-day activities. One hint should suffice for the reader to get an idea of what’s going on: push-based collections. While I’ll follow up on this subject pretty soon, this precursor post explains one of the things I had to work around.

 

PowerShell: a managed application or not?

Being designed around the concept of managed object pipelines, one may expect powershell.exe to be a managed executable. However, it turns out this isn’t the case completely. If you try to run ildasm.exe on the PowerShell executable (which lives in %windir%\system32\WindowsPowerShell\v1.0 despite the 2.0 version number, due to setup complications), you get the following message:

image

So much for the managed executable theory. What else can be going on to give PowerShell the power of managed objects. Well, it could be hosting the CLR. To check this theory, we can use the dumpbin.exe tool, using the /imports flag, checking for mscoree.dll functions being called. And indeed, we encounter the CorBindToRuntimeEx function that’s been the way to host the CLR prior to .NET 4’s in-process side-by-side introduction (a feature I should blog about as well since I wrote a CLR host for in-process side-by-side testing on my prior team here at Microsoft).

image

One of the parameters passed to CorBindToRuntimeEx is the version of the CLR to be loaded. Geeks can use WinDbg or cdb to set a breakpoint on this function and investigate the version parameter passed to it by the PowerShell code:

image

Notice the old code name of PowerShell still being revealed in the third stack frame (from the top). In order to hit this breakpoint on a machine that has .NET 4 installed, I’ve used the mscoreei.dll module rather than mscoree.dll. The latter has become a super-shim in the System32 folder, while the former one is where the CLR shim really lives (“i” stands for “implementation”). This refactoring has been done to aid in servicing the CLR on different version of Windows, where the operating system “owns” the files in the System32 folder.

Based on this experiment, it’s crystal clear the CLR is hosted by Windows PowerShell, with hardcoded affinity to v2.0.50727. This is in fact a good thing since automatic roll-forward to whatever the latest version of the CLR is on the machine could cause incompatibilities. One can expect future versions of Windows PowerShell to be based on more recent versions of the CLR, once all required testing has been carried out. (And in that case, one will likely use the new “metahost” CLR hosting APIs.)

 

Loading .NET v4 code in PowerShell v2.0

The obvious question with regards to some of the stuff I’ve been working on was whether or not we can run .NET v4 code in Windows PowerShell v2.0? It shouldn’t be a surprise this won’t work as-is, since the v2.0 CLR is loaded by the PowerShell host. Even if the hosting APIs weren’t involved and the managed executable were compiled against .NET v2.0, that version’s CLR would take precedence. This is in fact the case for ISE:

image

Trying to load a v4.0 assembly in Windows PowerShell v2.0 pathetically fails – as expected – with the following message:

image

So, what are the options to get this to work? Let’s have a look.

Warning:  None of those hacks are officially supported. At this point, Windows PowerShell is a CLR 2.0 application, capable of loading and executing code targeting .NET 2.0 through .NET 3.5 SP1 (all of which run on the second major version of the CLR).

 

Option 1 – Hacking the parameter passed to CorBindToRuntimeEx

If we just need an ad-hoc test of Windows PowerShell v2.0 running on CLR v4.0, we can take advantage of WinDbg once more. Simply break on the CorBindToRuntimeEx and replace the v2.0.50727 string in memory by the v4.0 version, i.e. v4.0.30319. The “eu” command used for this purpose stands for “edit memory Unicode”:

image

If we let go the debugger after this tweak, we’ll ultimately get to see Windows PowerShell running seemingly fine, this time on CLR 4.0. One proof is the fact we can load the .NET 4 assembly we tried to load before:

image

Another proof can be found by looking at the DLL list for the PowerShell.exe instance in Process Explorer:

image

No longer we see mscorwks.dll (which is indicative of CLR 2.0 or below), but a clr.dll module appears instead. While this hack works fine for single-shot experiments, we may want to get something more usable for demo and development purposes.

Note:  Another option – not illustrated here – would be to use Detours and intercept the CorBindToRuntimeEx call programmatically, performing the same parameter substitution as the one we’ve shown through the lenses of the debugger. Notice though the use of CorBindToRuntimeEx is deprecated since .NET 4, so this is and stays a bit of a hack either way.

 

Option 2 – Hosting Windows PowerShell yourself

The second option we’ll explore is to host Windows PowerShell ourselves, not by hosting the CLR and mimicking what PowerShell.exe does, but by using the APIs provided for this purpose. In particular, the ConsoleShell class is of use to achieve this. Moreover, besides simply hosting PowerShell in a CLR v4 process, we can also load snap-ins out of the box. But first things first, starting with a .NET 4 Console Application, add a reference to the System.Management.Automation and Microsoft.PowerShell.ConsoleHost assemblies which can be found under %programfiles%\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0:

image

The little bit of code required to get basic hosting to work is shown below:

using System;
using System.Management.Automation.Runspaces;
using Microsoft.PowerShell;

namespace PSHostCLRv4
{
class Program
{
static int Main(string[] args)
{
var config = RunspaceConfiguration.Create();
return ConsoleShell.Start(
config,
"Windows PowerShell - Hosted on CLR v4\nCopyright (C) 2010 Microsoft Corporation. All rights reserved.",
"",
args
);
}
}
}

Using the RunspaceConfiguration object, it’s possible to load snap-ins if desired. Since that would reveal the reason I was doing this experiment, I won’t go into detail on that just yet :-). The tip in the introduction should suffice to get an idea of the experiment I’m referring to. Here’s the output of the above:



image



While this hosting on .NET 4 is all done using legitimate APIs, it’s better to be conservative when it comes to using this in production since PowerShell hasn’t been blessed to be hosted on .NET 4. While compatibility between CLR versions and for the framework assemblies has been a huge priority for the .NET teams (I was there when it happened), everything should be fine. But the slightest bit of pixy dust (e.g. changes in timing for threading, a classic!) could reveal some issue. Till further notice, use this technique only for testing and experimentation.



Enjoy and stay tuned for more PowerShell fun (combined with other technologies)!

Friday, July 02, 2010  |  From B# .NET Blog

A quick update to my readers on a few little subjects. First of all, some people have noticed my blog welcomed readers with a not-so-sweet 404 error message the last few days. Turned out my monthly bandwidth was exceeded which was enough reason for my hosting provider to take the thing offline.

image

Since this is quite inconvenient I’ve started some migration of image content to another domain, which is work in progress and should (hopefully) prevent the issue from occurring again. Other measures will be taken to limit the download volumes.

Secondly, many others have noticed it’s been quite silent on my blog lately. As my colleague Wes warned me, once you start enjoying every day of functional programming hacking on Erik’s team, time for blogging steadily decreases. What we call “hacking” has been applied to many projects we’ve been working on over here in the Cloud Programmability Team, some of which are yet undisclosed. The most visible one today is obviously the Reactive Extensions both for .NET and for JavaScript, which I’ve been evangelizing both within and outside the company. Another one which I can only give the name for is dubbed “LINQ to Anything” that’s – as you can imagine – keeping me busy and inspired on a daily and nightly basis. On top of all of this, I’ve got some other writing projects going on that are nearing completion (finally).

Anyway, the big plan is to break the silence and start blogging again about our established technologies, including Rx in all its glory. Subjects will include continuation passing style, duality between IEnumerable<T> and IObservable<T>, parameterization for concurrency, discussion of the plethora of operators available, a good portion of monads for sure, the IQbservable<T> interface (no, I won’t discuss the color of the bikeshed) and one of its applications (LINQ to WMI Events), etc. Stay tuned for a series on those subjects starting in the hopefully very near future.

See you soon!

Monday, April 19, 2010  |  From B# .NET Blog

During my last tour I’ve been collecting quite some fundamental and introductory Rx samples as illustrations with my presentations on the topic. As promised, I’m sharing those out through my blog. More Rx content is to follow in the (hopefully near) future, with an exhaustive discussion of various design principles and choices, the underlying theoretical foundation of Rx and coverage of lots of operators.

In the meantime, download the sample project here. While the project targets Visual Studio 2010 RTM, you can simply take the Program.cs file and build a Visual Studio 2008 project around it, referencing the necessary Rx assemblies (which you can download from DevLabs).

Enjoy!

Sunday, March 28, 2010  |  From B# .NET Blog

As part of my three week African and European tour I have the honor to talk to the local Belgian Visual Studio User Group (VISUG) on April 6th in the Microsoft Belux offices in Zaventem, Belgium. Seats are limited, but there’s still time for you to register. More info can be found here. Oh, and there will be catering as well :-). Other opportunities to see me are on TechDays Belgium and DevDays Netherlands, which are both held next week. I’ll post resources about Rx talks to my blog later on and hope to find the bandwidth to write an extensive series on the topic, so stay tuned!

Friday, March 05, 2010  |  From B# .NET Blog

It's been a long time I've written epic blog posts over here, but for a good reason. We've been working very hard on getting a new Rx release out the door and I'm proud to announce it's available now through http://msdn.microsoft.com/en-us/devlabs/ee794896.aspx. Notice we got a .NET 4 RC compatible download available as well, so you can play with the latest and greatest of technologies in one big jar :-). More goodness will follow later, so stay tuned!


At some point in the foreseeable future, I'll start a series on how Rx works and what its operators are as well. If you have any particular topics you'd like to see covered, don't hesitate to let me know through my blog. In the meantime, make sure to evaporate all your feedback on the forums at http://social.msdn.microsoft.com/Forums/en-US/rx/threads. We love to hear what you think, what operators you believe are missing, any bugs you find, etc.


Update: We also have published a video on the new release at http://channel9.msdn.com/posts/J.Van.Gogh/Your-RxNET-Prescription-Has-Been-Refilled.


Have fun!
Bart @ Rx

Monday, January 11, 2010  |  From B# .NET Blog

Slightly over two years after arriving here in Redmond to work on the WPF team, time has come for me to make a switch and pursue other opportunities within the company. Starting January 13th, I’ll be working on the SQL Cloud Data Programmability Team on various projects related to democratizing the cloud. While we have much more rabbits sitting in our magician hats, Rx is the first big deliverable we’re working on.

For my blog, there won’t be much change as I’ve always written on topics related to what I’ll be working on: language innovation, data access, LINQ, type systems, lambda fun, etc. I’m planning to stay committed to blogging and other evangelism activities, including speaking engagements from time to time, so feel free to ping me if I’m in your proximity (or if you’re visiting our campus). Next up and confirmed are TechDays “low lands” in Belgium and the Netherlands, end of March.

Needless to say, I’m thrilled to have this opportunity of working together with a relatively small group of smart and passionate people, on the things I’d spend all my free time on anyway. Having this one-to-one alignment between day-to-day professional activities at work and all sorts of free time hacking projects is like a dream coming true. Thanks Danny, Erik, Jeffrey, Mark and Wes for taking me on board.

Expect to see more Rx blogging love over here, and watch out for more goodness to come your way in the foreseeable future. In the meantime, check out the following resources on the matter:

Please keep the feedback on Rx coming: help us, help you!

Thursday, January 07, 2010  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about EnumerableEx’s facilities to tame side-effects in a functionally inspired manner:

image

 

To side effect or not to side effect?

Being rooted in query comprehensions as seen in various functional programming languages (including (the) pure one(s)), one would expect LINQ to have a very functional basis. Indeed it has, but being hosted in various not functionally pure languages like C# and Visual Basic, odds are off reasoning about side-effects in a meaningful and doable manner. As we’ve seen before, when talking about the Do and Run operators, it’s perfectly possible for a query to exhibit side-effects during iteration. You don’t even have to look that far, since every lambda passed to a query operator is an opportunity of introducing effects. The delayed execution nature of LINQ queries makes that those effects appear at the point of query execution. So far, nothing new.

So, the philosophical question ought to be whether or not we should embrace side-effects or go for absolute purity. While the latter would be preferable for various reasons, it’s not enforceable through the hosting languages for LINQ, so maybe we should exploit side-effects if we really want to do so. The flip side of this train of thought is that those side-effects could come and get us if we’re not careful, especially when queries get executed multiple times, potentially as part of a bigger query. In such a case, you’d likely not want effects to be duplicated. Below is a sample of such a problematic query expression:

var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _);
xrs.Zip(xrs, (l, r) => l + r).Take(10).Run(Console.WriteLine);

Using Generate, we generate a sequence of random numbers. Recall the first argument is the state of the anamorphic Generate operator, which we get passed in the lambdas following it: once to produce an output sequence (just a single random number in our case) and once to iterate (just keeping the same random number generator here). What’s more important is we’re relying on the side-effect of reading the random number generator which, as the name implies, provides random answers to the Next inquiry every time it gets called. In essence, the side-effect can (not) be seen by looking at the signature of Random.Next, which says it returns an int. In .NET this means the method may return the same int every time it gets called, but there are no guarantees whatsoever (as there would be in pure functional programming languages).



This side-effect, innocent and intentional as it may seem, comes and gets us if we perform a Zip on the sequence with itself. Since Zip iterates both sides, we’re really triggering separate enumeration (“GetEnumerator”) over the same sequence two times. Though it’s the same sequence object, each of its iterations will produce different results. As a result, the expected invariant of the Zip’s output being only even numbers (based on the assumption l and r would be the same as they’re produced by the same sequence) doesn’t hold:




52

114


112


103

41


135



78


114


59

137


While random number generation is a pretty innocent side-effect, not having it under control properly can lead to unexpected results as shown above. We can visualize this nicely using another side-effect introduced by Do:




var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Do(xr => Console.WriteLine("! -> " + xr));
xrs.Zip(xrs, (l, r) => l + r).Take(10).Run(Console.WriteLine);


This will print a message for every number flowing out of the random number generating sequence, as shown below:




! -> 97

! -> 78


175


! -> 11


! -> 6


17


! -> 40


! -> 17


57


! -> 92


! -> 63


155


! -> 70


! -> 13


83


! -> 41


! -> 1


42


! -> 64


! -> 76


140


! -> 30


! -> 71


101


! -> 1


! -> 81


82


! -> 65


! -> 45


110


If we look a bit further to the original query, we come to the conclusion we can’t apply any form of equational reasoning anymore: it seems that the common subexpression “xrs” is not “equal” (as in exposing the same results) in both use sites. The immediate reason in the case of LINQ is the delayed execution, which is a good thing as our Generate call produces an infinite sequence. More broadly, it’s the side-effect that lies at the heart of the problem as equational reasoning breaks down in such a setting. For that very reason, side-effect permitting languages have a much harder time carrying out optimizations to code and need to be very strict about specifying the order in which operations are performed (e.g. in C#, arguments to a method call – which is always “call-by-value” – are evaluated in a left-to-right order).



Moving Take(10) up doesn’t change the delayed characteristic either:





var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Take(10)
.Do(xr => Console.WriteLine("! -> " + xr));
xrs.Zip(xrs, (l, r) => l + r).Run(Console.WriteLine);


What would help is forcing the common subexpression’s query to execute, persisting (= caching) its results in memory, before feeding them in to the expression using it multiple times:




var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Take(10).ToArray()
.Do(xr => Console.WriteLine("! -> " + xr));
xrs.Zip(xrs, (l, r) => l + r).Run(Console.WriteLine);


Don’t forget the Take(10) call though, as calling ToArray (or ToList) on an infinite sequence is not quite advised on today’s machines with finite amounts of memory. It’s clear such hacking is quite brittle and it breaks the delayed execution nature of the query expression. In other words, you can’t really hand out the resulting expression to a caller for it to call when it needs results (if it ever does). We’re too eager about evaluating (part of) the query, just to be able to tame the side-effect:




var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Take(10).ToArray();

var randomEvens = xrs.Zip(xrs, (l, r) => l + r);


// What if the consumer of randomEvens expects different results on each enumeration... Hard cheese! 
randomEvens.Run(Console.WriteLine);
randomEvens.Run(Console.WriteLine);


It’s clear that we need some more tools in our toolbox to tame desired side-effects when needed. That’s exactly what this post focuses on.



 


Option 1: Do nothing with Let



A first way to approach side-effects is to embrace them as-is. We just allow multiple enumerations of the same sequence to yield different results (or more generally, replicate side-effects). However, we can provide a bit more syntactical convenience in writing queries that reuse the same common subexpression in multiple places. In the above, we had to introduce an intermediate variable to store the common expression in, ready for reuse further on:




var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _);
xrs.Zip(xrs, (l, r) => l + r).Take(10).Run(Console.WriteLine);

Can’t we somehow write this more fluently? The answer is yes, using the Let operator which passes its left-hand side to a lambda expression that can potentially use it multiple times:




EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Let(xrs => xrs.Zip(xrs, (l, r) => l + r)).Take(10).Run(Console.WriteLine);

You can guess the signature of Let just by looking at the use above, but let’s include it for completeness:




public static IEnumerable<TResult> Let<TSource, TResult>(this IEnumerable<TSource> source, Func<IEnumerable<TSource>, IEnumerable<TResult>> function);


Because of the call-by-value nature of the languages we’re talking about, the expression used for the source parameter will be fully evaluated (not the same as enumerated!) before Let gets called, so we can feed it (again in a call-by-value manner) to the function which then can refer to it multiple times by means of its lambda expression parameter (in the sample above this is “xrs”). Let comes from the world of functional languages where it takes the following form:




let x = y in z


means (in C#-ish syntax)




(x => z)(y)


In other words, there’s a hidden function x => z sitting in a let-expression and the “value” for x (which is y in the sample) gets passed to it, providing the result for the entire let-expression. In EnumerableEx.Let, the function is clear as the second parameter, and the role of “y” is fulfilled by the source parameter. One could create a Let-form for any object as follows (not recommended because of the unrestricted extension method):




public static R Let<T, R>(this T t, Func<T, R> f)
{
return f(t);
}

With this, you can write things like this:




Console.WriteLine(DateTime.Now.Let(x => x - x).Ticks);

This will print 0 ticks for sure, since the same DateTime.Now is used for x on both sides of the subtraction. If we were to expand this expression by substituting DateTime.Now for x, we’d get something different due to the duplicate evaluation of DateTime.Now, exposing the side-effect of reading from the system clock:




Console.WriteLine((DateTime.Now - DateTime.Now).Ticks);


(Pop quiz: What sign will the above Ticks result have? Is it possible for the above to return 0 sometimes?)



 


Option 2: Cache on demand a.k.a. MemoizeAll



As we’ve seen before, on way to get rid of the side-effect replication is by forcing eager evaluation of the sequence through operators like ToArray or ToList. However, those are a bit too eager in various ways:



  • They persist the whole sequence, which won’t work for infinite sequences.
  • They do so on the spot, i.e. the eagerness can’t be delayed till a later point (‘on demand”).

The last problem can be worked around using the Defer operator, but the first one is still a problem requiring another operator. Both those things are what MemoizeAll provides for, essentially persisting the sequence bit-by-bit upon consumption. This is achieved by exposing the enumerable while only maintaining a single enumerator to its source:




image


In the figure above, this is illustrated. Red indicates a fetch operation where the original source’s iterator makes progress as an element is requested that hasn’t been fetched before. Green indicates persisted (cached, memoized) objects. Gray indicates elements in the source that have been fetched and hence belong to the past from the (single) source-iterators point of view: MemoizeAll won’t ever request those again. Applying this operator to our running sample using Zip will produce results with the expected invariant:




var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Do(xr => Console.WriteLine("! -> " + xr))
.MemoizeAll();
xrs.Do(xr => Console.WriteLine("L -> " + xr)).Zip(
xrs.Do(xr => Console.WriteLine("R -> " + xr)),
(l, r) => l + r).Take(10).Run(Console.WriteLine);


Now we’ll see the xrs-triggered Do messages being printed only 10 times since the same element will be consumed by the two uses of xrs within Zip. The result looks as follows, showing how the right consumer of Zip never causes a fetch back to the random number generating source due to the internal caching by MemoizeAll:




! -> 71

L -> 71


R -> 71


142


! -> 18


L -> 18


R -> 18


36


! -> 12


L -> 12


R -> 12


24


! -> 96


L -> 96


R -> 96


192


! -> 1


L -> 1


R -> 1


2


! -> 54


L -> 54


R -> 54


108


! -> 9


L -> 9


R -> 9


18


! -> 87


L -> 87


R -> 87


174


! -> 18


L -> 18


R -> 18


36


! -> 12


L -> 12


R -> 12


24


What about lifetime of the source’s single enumerator? As soon as one of the consumers reaches the end of the underlying sequence, we got all elements cached and are prepared to any possible inquiry for elements on the output side of the MemoizeAll operator, hence it’s possible to dispose of the original enumerator. It should also be noted that memoization operators use materialization internally to capture the behavior of the sequence to expose to all consumers. This means exceptions are captured as Notification<T> so they’re repeatable to all consumers:




var xes = EnumerableEx.Throw<int>(new Exception()).StartWith(1).MemoizeAll();
xes.Catch((Exception _) => EnumerableEx.Return(42)).Run(Console.WriteLine);
xes.Catch((Exception _) => EnumerableEx.Return(42)).Run(Console.WriteLine);

The above will therefore print 1, 42 twice. In other words, the source blowing up during fetching by MemoizeAll doesn’t terminate other consumers that haven’t reached the faulty state yet (but if they iterate long enough, they’ll eventually see it exactly as the original consumer did).



Finally, what’s All about MemoizeAll? In short: the cache used by the operator can grow infinitely large. The difference compared to ToArray and ToList has been explained before, but it’s worth repeating it: MemoizeAll doesn’t fetch its source’s results on the spot but only makes progress through the source’s enumerator when one of the consumers requests an element that hasn’t been retrieved yet. Call it a piecemeal ToList if you want.



 


Option 3: Memoize, but less “conservative”



While MemoizeAll does the trick to avoid repetition of side-effects, it’s quite conservative in its caching as it never throws away elements it has retrieved. You never know whether someone – like a slow enumerator or a whole new enumerator over the memoized result – will request the data again, so a general-purpose Memoize can’t throw away a thing. However, if you know the behavior of the consumers of the memoized source, you can be more efficient about it and use Memoize specifying a buffer size. In our running sample of Zip we know that both uses of the source for the left and right inputs to Zip will be enumerated at the same pace, so it suffices to keep the last element in the cache in order for the right enumerator to be able to see the element the left enumerator just saw. Memoize with buffer size 1 does exactly that:




var xrs = EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
.Do(xr => Console.WriteLine("! -> " + xr))
.Memoize(1);
xrs.Do(xr => Console.WriteLine("L -> " + xr)).Zip(
xrs.Do(xr => Console.WriteLine("R -> " + xr)),
(l, r) => l + r).Take(10).Run(Console.WriteLine);


In pictures, this looks as follows:




image


Another valid buffer size – also the default – is zero. It’s left to the reader, as an exercise, to come up with a plausible theory for what that one’s behavior should be and to depict this case graphically.



(Question: Would it be possible to provide a “smart” memoization operator that knows exactly when it can abandon items in the front of its cache? Why (not)?)



 


Derived forms



The difference between Let and the Memoize operators is that the former feeds in a view on an IEnumerable<T> source to a function, allowing that one to refer to the source multiple times in the act of producing a source in return. Let is, as we saw, nothing but fancy function application in a “fluent” left-to-right dataflowy way. Derived forms of Memoize exist that have the same form where a function is fed a memoized data source:



  • Replay is Memoize on steroids
  • Publish is MemoizeAll on steroids

The following snippets show just what those operators do (modulo parameter checks):




public static IEnumerable<TResult> Publish<TSource, TResult>(this IEnumerable<TSource> source,
Func<IEnumerable<TSource>, IEnumerable<TResult>> function)
{
return function(source.MemoizeAll());
}

public static IEnumerable<TResult> Publish<TSource, TResult>(this IEnumerable<TSource> source,
Func<IEnumerable<TSource>, IEnumerable<TResult>> function,
TSource initialValue)
{
return function(source.MemoizeAll().StartWith(initialValue));
}

public static IEnumerable<TResult> Replay<TSource, TResult>(this IEnumerable<TSource> source,
Func<IEnumerable<TSource>, IEnumerable<TResult>> function)
{
return function(source.Memoize());
}

public static IEnumerable<TResult> Replay<TSource, TResult>(this IEnumerable<TSource> source,
Func<IEnumerable<TSource>, IEnumerable<TResult>> function,
int bufferSize)
{
return function(source.Memoize(bufferSize));
}


So we could rewrite our Zip sample in a variety of ways, the following being the cleanest one-sized buffer variant:




EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
    .Do(xr => Console.WriteLine("! -> " + xr))
    .Replay(xrs => xrs.Do(xr => Console.WriteLine("L -> " + xr)).Zip(
                       xrs.Do(xr => Console.WriteLine("R -> " + xr)),
                       (l, r) => l + r),
            1)
    .Take(10).Run(Console.WriteLine);


 


Option 4: Fair (?) sharing with Share and Prune



The Share operator shares an IEnumerator<T> for any number of consumers of an IEnumerable<T>, hence avoiding duplication of side-effects. In addition, it also guarantees that no two consumers can see the same element, so in effect the Share operator has the potential of distributing elements across different consumers. Looking at it from another angle, one consumer can steal elements from the source, preventing another consumer from seeing it. Prune is derived from Share as follows:




public static IEnumerable<TResult> Prune<TSource, TResult>(this IEnumerable<TSource> source,
    Func<IEnumerable<TSource>, IEnumerable<TResult>> function)
{
    return function(source.Share());
}


The naming for Prune follows from the effect consumers inside the function have on the sequence being shared: each one consuming data effectively prunes elements from the head of the sequence, so that others cannot see those anymore. The example below shows yet another way a Zip could go wrong (practical scenarios for this operator would rather involve distributing work across consumers), since the left and right consumers both advance the cursor of the same shared enumerator under the hood:




EnumerableEx.Generate(new Random(), rnd => EnumerableEx.Return(rnd.Next(100)), /* iterate */ _ => _)
    .Do(xr => Console.WriteLine("! -> " + xr))
    .Prune(xrs => xrs.Do(xr => Console.WriteLine("L -> " + xr)).Zip(
                      xrs.Do(xr => Console.WriteLine("R -> " + xr)),
                      (l, r) => l + r))
    .Take(10).Run(Console.WriteLine);

The result of this is of interest since the logging will reveal the sharing characteristic. Looking at the first Do’s output we’ll see it gets triggered by any consumer on the inside of Prune:




! -> 37
L -> 37
! -> 51
R -> 51
88
! -> 98
L -> 98
! -> 89
R -> 89
187
! -> 4
L -> 4
! -> 71
R -> 71
75
! -> 43
L -> 43
! -> 30
R -> 30
73
! -> 18
L -> 18
! -> 24
R -> 24
42
! -> 17
L -> 17
! -> 41
R -> 41
58
! -> 45
L -> 45
! -> 68
R -> 68
113
! -> 83
L -> 83
! -> 53
R -> 53
136
! -> 64
L -> 64
! -> 69
R -> 69
133
! -> 0
L -> 0
! -> 22
R -> 22
22

In pictures, this looks as follows:




image


Exercise: Can you guess how Memoize(0) differs from Share?



Quiz: What should be the behavior of the following fragment? (Tip: you’ve got to know what two from clauses result in and how they execute.)




Enumerable.Range(0, 10)
    .Prune(xs => from x in xs.Zip(xs, (l, r) => l + r)
                 from y in xs
                 select x + y)
    .Run(Console.WriteLine);


 


Next on More LINQ



A look at the Asynchronous and Remotable operators, dealing with some infrastructure-related concepts, wrapping up this series for now.

Sunday, January 03, 2010  |  From B# .NET Blog

select top 9 [Subject] from dbo.cs_Posts
where postlevel = 1 and usertime < '01/01/2010' and usertime >= '01/01/2009'
order by TotalViews desc



Forgive me for the classic SQL, but here are the results with some short annotations inline:



  1. (Mis)using C# 4.0 Dynamic – Type-Free Lambda Calculus, Church Numerals, and more





    Uses the new C# 4.0 dynamic feature to implement the type-free lambda calculus consisting of an abstraction and application operator. Besides talking about the fundamentals of lambda calculus, this post shows how to implement the SKI combinators and Church Booleans, Church numerals and even recursive functions.


     
  2. LINQ to Ducks – Bringing Back The Duck-Typed foreach Statement To LINQ





    Since LINQ to Objects is layered on top of IEnumerable<T>, it doesn’t work against objects that just happen to implement the enumeration pattern consisting of GetEnumerator, MoveNext and Current. Since the foreach statement actually does work against such data sources, we bring back this duck typing to LINQ using AsDuckEnumerable<T>().


     
  3. Type-Free Lambda Calculus in C#, Pre-4.0 – Defining the Lambda Language Runtime (LLR)





We repeat the exercise of the first blog post, but now without C# 4.0 dynamic features, encoding application and abstraction operators using none less than exceptions. Those primitives define what I call the Lambda Language Runtime (LLR), which we use subsequently to implement a bunch of samples similar to the ones in the first post.


     
  4. Taming Your Sequence’s Side-Effects Through IEnumerable.Let





    Enumerable sequences can exhibit side-effects for various reasons ranging from side-effecting filter predicates to iterators with side-effecting imperative code interwoven in them. The Let operator introduced in this post helps you to keep those side-effects under control when multiple “stable” enumerations over the sequence are needed.


     
  5. Statement Trees With Less Pain – Follow-Up on System.Linq.Expressions v4.0





    The introduction of the DLR in the .NET 4 release brings us not only dynamic typing but also full-fledged statement trees as an upgrade to the existing LINQ expression trees. Here we realize a prime number generator using statement trees and runtime compilation, reusing expression trees emitted by the C# compiler where possible.


     
  6. LINQ to Z3 – Theorem Solving on Steroids – Part 1





    LINQifying Microsoft Research’s Z3 theorem solver has been one of my long-running side-projects. This most recent write-up on the matter illustrates the concept of a LINQ-enabled Theorem<T> and the required visitor implementation to interop with the Z3 libraries. Finally, we show a Sudoku and Kakuro solver expressed in LINQ.


     
  7. Expression Trees, Take Two – Introducing System.Linq.Expressions v4.0





    Just like post 5, we have a look at the .NET 4 expression tree support, now including statement trees. Besides pointing out the new tree node types, we show dynamic compilation and inspect the generated IL code using the SOS debugger’s dumpil command. In post 5, we follow up by showing how to reuse C# 3.0 expression tree support.


     
  8. Unlambda .NET – With a Big Dose of C# 3.0 Lambdas





    Esoteric programming languages are good topics for Crazy Sundays posts. In this one we had a look at how to implement the Unlambda language – based on SKI combinators and with “little” complications like Call/CC – using C# 3.0 with lots of lambda expressions. To satisfy our curiosity, we run a Fibonacci sample program.


     
  9. C# 4.0 Feature Focus – Part 4 – Co- and Contra-Variance for Generic Delegate and Interface Types





    Generic co- and contra-variance is most likely the most obscure C# 4.0 feature, so I decided to give it a bit more attention using samples of the everyday world (like apples and tomatoes). We explain why arrays are unsafe for covariance and how generic variance gets things right, also increasing your expressiveness.

In conclusion, it seems esoteric and foundational posts are quite popular, but then again that’s what I write about most. For 2010, I hope to please my readers’ interests even further with the occasional “stunt coding”, “brain pain” and “mind bending” (based on Twitter quotes in 2009). If there are particular topics you’d like to see covered, feel free to let me know. So, thanks again for reading in 2009 (good for slightly over 1TB – no that’s not a typo – of data transfer from my hoster) and hope to see you back in 2010!

Saturday, January 02, 2010  |  From B# .NET Blog

Introduced in my previous blog post on The Essence of LINQ – MinLINQ, the first release of this project is now available for reference at the LINQSQO CodePlex website at http://linqsqo.codeplex.com. Compared to the write-up over here in my previous post, there are a few small differences and caveats:

  • Only FEnumerable functionality is available currently; the FObservable dual may follow later.
  • Option<T> has been renamed to Maybe<T>, to be CLS compliant and avoid clashes with the VB keyword.
  • Some operators are not provided, in particular GroupBy, GroupJoin and Join. They’re left as an exercise.
  • A few operator implementations are categorized as “cheaters” since they roundtrip through System.Linq.
  • Don’t nag about performance. The MinLINQ code base is by no means optimal and so be it.
  • Very few System.Interactive operators are included since those often require extra foundations (such as concurrency).

A few highlights:

  • FEnumerable.Essentials.cs is where the fun starts. Here the three primitives – Ana, Bind and Cata – form the ABC of LINQ.
  • There’s a Naturals() constructor function generating an infinite sequence of natural numbers, used in operators that use indexes.
  • OrderBy and ThenBy are supported through roundtripping to System.Linq with a handy trick to keep track of IOrderedEnumerable<T>.
  • As a sample, I’ve included Luke Hoban’s LINQified RayTracer with AsFEnumerable and AsEnumerable roundtripping. It works just fine.
  • Creating an architectural diagram in Visual Studio 2010 yields the following result (not meant to be zoomed in), where I’ve used the following colors:
    • Green = Ana
    • Blue = Bind
    • Red = Cata

image

Obviously, all sorts of warnings apply. People familiar with my blog adventures will know this already, but just in case:

//
// This project is meant as an illustration of how an academically satisfying layering
// of a LINQ to Objects implementation can be realized using monadic concepts and only
// three primitives: anamorphism, bind and catamorphism.
//
// The code in this project is not meant to be used in production and no guarantees are
// made about its functionality. Use it for academic stimulation purposes only. To use
// LINQ for real, use System.Linq in .NET 3.5 or higher.
//
// All of the source code may be used in presentations of LINQ or for other educational
// purposes, but references to http://www.codeplex.com/LINQSQO and the blog post referred
// to above - "The Essence of LINQ - MinLINQ" - are required.
//


Either way, if you find LINQ interesting and can stand some “brain pain of the highest quality” (a Twitter quote by dahlbyk), this will likely be something for you.

Friday, January 01, 2010  |  From B# .NET Blog

Introduction

Before reaching the catharsis in the “More LINQ with System.Interactive” series over here, I wanted to ensure a solid understanding of the essence of LINQ in my reader base. Often people forget the true essence of a technology due to the large number of auxiliary frameworks and extensions that are being provided. Or worse, sometimes a sense for the essence never materialized.

Searching for essence is nothing other than a “group by” operation, partitioning the world in fundamentals and derived portions. One succeeds in this mission if the former group is much smaller than the latter. In this post, we’ll try to reach that point for the IEnumerable<T> and IObservable<T> LINQ implementations, illustrating both are fundamentally similar (and dare I say, dual?). You can already guess much of the essence lies in the concept of monads. By the end of the post, we’ll have distilled the core of LINQ, which I refer to as MinLINQ since small is beautiful.

 

Interfaces are overrated?

While loved by object-oriented practitioners, interfaces are essentially nothing but records of functions. And functions, as we all know, are the fundamental pillars of functional programming languages. This trivial observation is illustrated below. I’ll leave it to the reader to think about various implications of the use of a (covariant) IRecord representation for objects:

class Program
{
    static void Main()
    {
        for (var c = new Counter(); c.Get() < 10; c.Inc(1))
            Console.WriteLine(c.Get());
    }
}

interface IRecord<out T1, out T2>
{
    T1 First { get; }
    T2 Second { get; }
}

class Counter : IRecord<Func<int>, Action<int>>
{
    // Data
    private int _value;

    // Code - explicit implementation to hide First, Second
    Func<int> IRecord<Func<int>, Action<int>>.First { get { return () => _value; } }
    Action<int> IRecord<Func<int>, Action<int>>.Second { get { return i => _value += i; } }

    // Code - friendly "interface"
    public Func<int> Get { get { return ((IRecord<Func<int>, Action<int>>)this).First; } }
    public Action<int> Inc { get { return ((IRecord<Func<int>, Action<int>>)this).Second; } }
}


Why do we care? Well, it turns out that IEnumerable<T> and IObservable<T> tend to obscure the true meaning of the objects a bit by having many different methods to facilitate the task of enumeration and observation, respectively. The source of this apparent bloating is irrelevant (and in fact follows design guidelines of an object-oriented inspired framework); what matters more is to see how the two mentioned interfaces can be boiled down to their essence.



Minimalistic as we are, we’re going to drop the notion of error cases that manifest themselves through MoveNext throwing an exception and OnError getting called, respectively on IEnumerator<T> and IObserver<T>. For similar reasons of simplification, we’ll also not concern ourselves with the disposal of enumerators or subscriptions. The resulting picture looks as follows:




image


To consolidate things a bit further, we’ll collapse MoveNext/Current on the left, and OnNext/OnCompleted on the right. How so? Well, either getting or receiving the next element can provide a value or a termination signal. This is nothing but an optional value: a pairing of a value and a Boolean indicating its presence. Turns out we have such a thing in the framework, called Nullable<T>, but since one can’t nest those guys or use them on reference types, it doesn’t help much. Instead, we’ll represent the presence or absence of a value using an Option<T> type:




public abstract class Option<T>
{
    public abstract bool HasValue { get; }
    public abstract T Value { get; }

    public sealed class None : Option<T>
    {
        public override bool HasValue
        {
            get { return false; }
        }

        public override T Value
        {
            get { throw new InvalidOperationException(); }
        }

        public override string ToString()
        {
            return "None<" + typeof(T).Name + ">()";
        }
    }

    public sealed class Some : Option<T>
    {
        private T _value;

        public Some(T value)
        {
            _value = value;
        }

        public override bool HasValue
        {
            get { return true; }
        }

        public override T Value
        {
            get { return _value; }
        }

        public override string ToString()
        {
            return "Some<" + typeof(T).Name + ">(" + (_value == null ? "null" : _value.ToString()) + ")";
        }
    }
}


The subtypes None and Some are optional though convenient, hence I’ll leave them in. With this, IEnumerator<T> would boil down to an interface with a single method retrieving an Option<T>. When it returns a Some object, there was a next element and we got it; when it returns None, we’ve reached the end of the enumeration. Similarly for IObserver<T>: OnNext and OnCompleted are merged into a single method receiving an Option<T>. Interfaces with a single method have a name: they’re delegates. So both those types can be abbreviated to:




IObserver<T>    ->  Action<Option<T>>
IEnumerator<T>  ->  Func<Option<T>>


A quick recap: an observer is something you can give a value, or tell that the end of the observable object has been reached, hence it takes in an Option<T>; an enumerator is something you can get a value from, which can also signal the end of the enumerable object, hence it produces an Option<T>. In a more functional notation, one could write:




Option<T> -> ()

() -> Option<T>


Here the arrow indicates “goes to”, just as in lambda expressions, with the argument on the left and the return type on the right. All that has happened is reversing the arrows to go from an observer to an enumerator and vice versa. That’s the essence of dualization.



But we’re not done yet. Look one level up, at the IEnumerable<T> and IObservable<T> interfaces. Those are single-method ones too, hence we can play the same trick as we did before. The IEnumerable<T> interface’s single method returns an IEnumerator<T>, which we already collapsed into a simple function above. And in a dual manner, IObservable<T>’s single method takes in an IObserver<T>, which we also collapsed above. This yields the following result:




IObservable<T>  ->  Action<Action<Option<T>>>
IEnumerable<T>  ->  Func<Func<Option<T>>>


If that isn’t a simplification, I don’t know what would be. An observable is nothing other than an action taking in an action taking in an observed value, while an enumerable is nothing other than a function returning a function returning a yielded value. Or, in concise functional notation:




(Option<T> -> ()) -> ()

() -> (() -> Option<T>)


Again, to go from one world to the other, it suffices to reverse the arrows to reach the dual form. In summary, have a look at the following figure:




image


 


Flat functions – FEnumerable and FObservable



Since we’ve flattened the imperative interfaces into plain functions, over which we’re going to provide several operators, we need a name for the types to stick those operators in. Though we’re not going to make things purely functional on the inside (as we’ll rely on side-effects to implement various operators), I still like to call them function-style enumerable and observable, hence the names FEnumerable and FObservable (not meant to be pronounceable), where F stands for Function as opposed to I for Interface. In addition, Ex variants will materialize to realize some layering, as discussed below. The result, including FEnumerableEx2 that’s left as an exercise, is shown below:




image


 


Five essential operators, or maybe even less



To continue on our merry way towards the essence of LINQ, we’ll be providing five essential operators as the building blocks to construct most other operators out of. Needless to say, those operators will use the above flat function “interfaces” to do their work on. Let’s start with a couple of easy ones: Empty and Return.



 


Empty



The Empty operator is very straightforward: it never produces Option<T>.Some values and immediately signals completion with an Option<T>.None. Hence the produced collection is empty. How do we realize this operator in the enumerable and observable case? Not surprisingly, the implementation is straightforward in both cases:




public static class FEnumerable
{
    public static Func<Func<Option<T>>> Empty<T>()
    {
        return () => () => new Option<T>.None();
    }


First, the FEnumerable one. All it does is return a function that returns an end-of-sequence None signal in return to getting called. Notice the two levels of nesting needed to conform to the signature. The outer function is the one retrieving the enumerator, while the inner one is the equivalent of MoveNext and Current. For absolute clarity:




image


On the FObservable side of things, we find a similar implementation shuffled around a little bit, as shown below:




public static class FObservable
{
    public static Action<Action<Option<T>>> Empty<T>()
    {
        return o => o(new Option<T>.None());
    }

What used to be output now becomes input: the None constructor call no longer appears in an output position but has moved to an input position. Similarly for the observer, indicated with o, which has moved to an input position. Upon giving the observable object (the whole thing) an observer (o), the latter simply gets called with a None object indicating the end of the sequence. The inner call is equivalent to OnCompleted, while the whole lambda expression is equivalent to Subscribe.




image


The careful reader may spot an apparent difference in the multiplicity of the involved operations. Where one enumerable can be used to get multiple enumerators, it seems that one observable cannot be used with multiple observers. This is only how it looks, as duality comes to the rescue to explain this again. The statement for enumerables goes as follows: “multiple calls to GetEnumerator each return one IEnumerator”. The dual of that becomes “a single call to Subscribe can take in multiple IObservers”. While that’s not exactly the case in the real IObserver land (where you either wrap all of your observers in a single IObserver to achieve this effect, or make multiple calls to Subscribe, assuming – and that’s where the MinLINQ approach differs slightly – a call to Subscribe doesn’t block), it’s literally true in FObservable. How so? Well, one can combine delegates using the + operator to achieve the effect of subscribing multiple observers at the same time:




Action<Option<int>> observer1 = x => Console.WriteLine("1 <- " + x);
Action<Option<int>> observer2 = x => Console.WriteLine("2 <- " + x);

var xs = FObservable.Return(1);
xs(observer1 + observer2);

The above will print Some(1) and None() twice, since both observers are getting it (in invocation order, coinciding with lexical order).



 


Return



The previous sample brings us seamlessly to the next operator: Return, which realizes a singleton enumerable or observable collection. Though this one seems easy as well, it’s getting a bit more complex in the enumerable case as we need to maintain state across calls to “MoveNext”. Moreover, we need to do so on a per-enumerator basis as they all need to have their own view on the sequence. In our observable case, for the reasons mentioned above, things are slightly simpler as we can just “fire and forget” all data upon receiving a call to Subscribe. (Exercise: how would you make Subscribe asynchronous with respect to the sequence producing its values? When is this useful and when is it harmful?)



Let’s first look at the Return operator realization in FEnumerable:




public static Func<Func<Option<T>>> Return<T>(T value)
{
    return () =>
    {
        int i = 0;
        return () =>
            i++ == 0
                ? (Option<T>)new Option<T>.Some(value)
                : (Option<T>)new Option<T>.None();
    };
}


The state local to the “enumerator block” contains a counter that keeps track of the number of MoveNext calls that have been made. The first time, we return a Some(value) object, and the second (and subsequent) time(s) we answer with None. Notice this has the implicit contract of considering a None value as a terminal in the grammar. If you want to enforce this policy, an exception could be raised if i reaches 2.
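For instance, a protocol-enforcing variant could look as follows (a sketch of mine; ReturnStrict is a hypothetical name, not part of MinLINQ):

public static Func<Func<Option<T>>> ReturnStrict<T>(T value)
{
    return () =>
    {
        int i = 0;
        return () =>
        {
            // Enforce the protocol: None is a terminal, so a third call is an error.
            if (i > 1)
                throw new InvalidOperationException("Enumerated past the terminal None.");
            return i++ == 0
                ? (Option<T>)new Option<T>.Some(value)
                : (Option<T>)new Option<T>.None();
        };
    };
}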



In the FObservable world, things are quite easy. Upon a subscription call, we signal a Some and None message on the OnNext function, like this:




public static Action<Action<Option<T>>> Return<T>(T value)
{
    return o =>
    {
        o(new Option<T>.Some(value));
        o(new Option<T>.None());
    };
}


 


Bind



Who says Return and knows about monads immediately thinks of Bind (>>= in Haskell). The Bind operator, known as SelectMany in LINQ, provides an essential combinator allowing composition of objects in the monad. In our case, those monads are IEnumerable<T> and IObservable<T>. In a previous episode of my More LINQ series, I’ve explained the basic idea of monadic composition a bit further, as summarized in the figure below:




image


In the above, M<.> has to be substituted for either Func<Func<.>> or Action<Action<.>> to yield the signature for both FEnumerable’s and FObservable’s Bind operators. The implementation of the operator in the latter case is the more straightforward one of the two:




public static Action<Action<Option<R>>> Bind<T, R>(this Action<Action<Option<T>>> source,
    Func<T, Action<Action<Option<R>>>> selector)
{
    return o => source(x =>
    {
        if (x is Option<T>.None)
        {
            o(new Option<R>.None());
        }
        else
        {
            selector(x.Value)(y =>
            {
                if (y is Option<R>.Some)
                    o(y);
            });
        }
    });
}


Here, upon subscribing to an observable using observer “o”, the operator itself subscribes to the source observable that was fed in to the function. It does so by providing an observer that takes in the received element as “x”. Inside the observer’s body, which gets called for every element raised by the source, “x” is analyzed to see whether or not the source has terminated. If not, Bind does its combining work by calling the selector function for the received element, getting back a new observable source “selector(x.Value)”. The goal of Bind is to surface the values raised on this source to the surface of the operator call. Hence, we subscribe to this computed source “selector(x.Value)” by providing an observer that takes in the received value as “y” and raises that to the surface by calling “o” (the external observer). Again we assume None is terminating the sequence, which could be enforced by keeping a bit of state (left as an exercise). We’ll see examples of operator usage later on.



(Exercise: What if we want the Subscribe method to return immediately, running the Bind in the background? How would you do so?)



In the FEnumerable case, things get more complex as we need to keep track of where we are in the source and projected sequences across different calls to “MoveNext”. While we could realize this using a state machine (just like iterators would do), I’ve taken on the challenge to write a state-keeping set of loops by hand. It may well be optimized or tweaked but it seems to do its job. Important situations to keep in mind include encountering empty inner sequences (signaled by None), requiring us to loop till we eventually find an object to yield. It’s also important to properly return a Option<R>.None object when we reach the end of the outer source. One of the most essential parts of the code below is the storage of state outside the inner lambda, hence keeping per-enumerator state. Besides cursors into the outer and current inner sequences, we also keep the inner enumerator (recall the signature corresponding to IEnumerator<T>) in “innerE”.




public static Func<Func<Option<R>>> Bind<T, R>(this Func<Func<Option<T>>> source,
    Func<T, Func<Func<Option<R>>>> f)
{
    return () =>
    {
        var e = source();

        Option<T> lastOuter = new Option<T>.None();
        Option<R> lastInner = new Option<R>.None();
        Func<Option<R>> innerE = null;

        return () =>
        {
            do
            {
                while (lastInner is Option<R>.None)
                {
                    lastOuter = e();

                    if (lastOuter is Option<T>.None)
                    {
                        return new Option<R>.None();
                    }
                    else
                    {
                        innerE = f(lastOuter.Value)();
                    }

                    lastInner = innerE();
                    if (lastInner is Option<R>.Some)
                    {
                        return lastInner;
                    }
                }

                lastInner = innerE();
            } while (lastInner is Option<R>.None);

            return lastInner;
        };
    };
}


The reader is invited to make sense of the above at his or her own pace, keeping in mind the regular LINQ to Objects implementation is the following much more comprehensible code:




public static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> f)
{
    foreach (var item in source)
        foreach (var result in f(item))
            yield return result;
}


The interesting thing about the SelectMany implementation is that the types in the signature exactly tell you what to do: the main operation on an IEnumerable is to enumerate it using foreach. The only parameter you can do that on is source, but you can’t yield those elements, as the output expects elements of type R and we have elements of type T. However, the function “f” accepts a T and produces an IEnumerable<R>, so if we call that one and enumerate the results, we get exactly the elements we can yield. Simple.



This operator is essential to LINQ (and monads) in that it allows many other operators to be written in terms of it. Where and Select are two that pop to mind immediately, and we’ll come to those when we talk about FEnumerableEx (and FObservable) later.



 


Ana



An anamorphism is the fancy word for an operator that produces an M<T> out of something outside M<.>, by use of unfolding. Given some seed value and an iterator function, one can produce a potentially infinite sequence of elements. Implementation of this operator is straightforward in both cases, again with the enumerable case requiring some state:




public static Func<Func<Option<T>>> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    return () =>
    {
        Option<T> value = new Option<T>.None();
        return () =>
            condition((value = new Option<T>.Some(
                value is Option<T>.None
                    ? seed
                    : next(value.Value))).Value)
                ? (Option<T>)new Option<T>.Some(value.Value)
                : (Option<T>)new Option<T>.None();
    };
}

For fun and giggles I wrote this one using conditional operator expressions only, with an assignment side-effect nicely interwoven. It’s left to the reader to write it in a more imperative style (one possible rendering is sketched after the iterator version below). Again, we’re assuming the enumerator function is not called after a None object has been received. The basic principle of the operator is clear, and the implementation would look like this in regular C# with iterators:




public static IEnumerable<T> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    for (T t = seed; condition(t); t = next(t))
        yield return t;
}
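And for the lambda-based version above, one possible answer to the imperative-style exercise looks as follows (my sketch; equivalent modulo style):

public static Func<Func<Option<T>>> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    return () =>
    {
        bool started = false;
        T current = default(T);
        return () =>
        {
            // The first call starts from the seed; subsequent calls iterate.
            current = started ? next(current) : seed;
            started = true;
            return condition(current)
                ? (Option<T>)new Option<T>.Some(current)
                : (Option<T>)new Option<T>.None();
        };
    };
}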

On the FObservable side, things are simpler again (the main reason being that FEnumerable is hard because of its lazy nature and because we can’t use iterators):




public static Action<Action<Option<T>>> Ana<T>(T seed, Func<T, bool> condition, Func<T, T> next)
{
    return o =>
    {
        for (T t = seed; condition(t); t = next(t))
            o(new Option<T>.Some(t));
        o(new Option<T>.None());
    };
}

Again, the reader is invited to think about what it’d take to have this sequence generated in the background, as opposed to blocking the caller.



As an additional exercise, can you rewrite Return and Empty in terms of Ana, therefore making those two operators no longer primitives? Doing so will bring down the total of essentials to three: Ana, Cata and Bind:




image
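For what it’s worth, the Empty half of that exercise is a one-liner (a sketch of mine, not necessarily how MinLINQ realizes it): a condition that fails for the seed already produces no elements at all. Return takes more creativity, since Ana’s condition function only sees values of type T and hence can’t count calls by itself:

public static Func<Func<Option<T>>> Empty<T>()
{
    // The condition fails immediately, so the very first call yields None.
    return FEnumerable.Ana<T>(default(T), _ => false, _ => _);
}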


 


Cata



The opposite of an anamorphism is a catamorphism, also known as Aggregate in LINQ. Its goal is to fold an M<T> into something outside M<.>, e.g. computing the sum of a sequence of numbers. Since this is a greedy operation, we can do it on the spot for both the FEnumerable and FObservable cases, as shown below:




public static R Cata<T, R>(this Func<Func<Option<T>>> source, R seed, Func<R, T, R> f)
{
    var e = source();

    Option<T>.Some value;
    R result = seed;
    while ((value = e() as Option<T>.Some) != null)
    {
        result = f(result, value.Value);
    }

    return result;
}

First for the enumerable case, we simply run till we get a None object, continuously calling the aggregation function, starting with the seed value. In the observable case, things are equally simple:




public static R Cata<T, R>(this Action<Action<Option<T>>> source, R seed, Func<R, T, R> f)
{
    R result = seed;

    bool end = false;
    source(x =>
    {
        if (x is Option<T>.Some && !end)
            result = f(result, x.Value);
        else
            end = true; // or break using exception
    });

    return result;
}


This time we have to hook up an observer with the source and analyze what we get back. Notice the code above shows one approach to break out of, or immunize, an observer after a None message has been received. If all constructor functions can be trusted (which is not the case with a raw Action or Func), such protections wouldn’t be required, as we’re defining a closed world of constructors and combinators. If the former group never emits sequences that don’t follow the described protocol, and the latter never combines existing sequences into an invalid one (i.e. preserving the protocol properties), it shouldn’t be possible to fall off a cliff.



 


Bridging the brave new world with the old-school one



Before getting into more operators layered on top of the essential ones provided above, we should spend a few minutes looking at ways to convert back and forth between the new functionally inspired “flat” world and the familiar interface-centric “bombastic” world of LINQ. In particular, can we establish the following conversions?



  • IEnumerable<T> to Func<Func<Option<T>>>
  • Func<Func<Option<T>>> to IEnumerable<T>
  • IObservable<T> to Action<Action<Option<T>>>
  • Action<Action<Option<T>>> to IObservable<T>

Obviously the answer is we can. Let’s focus on the first two as a starter. It’s clear that in order to go from an IEnumerable<T> to our new world of FEnumerable we should iterate the specified sequence. We should do so in a lazy manner such that upon every call to FEnumerable’s inner function (playing the enumerator’s role) we fetch an element out of the source IEnumerator<T>, but no earlier. In other words, we have to keep the iteration state which is represented by an IEnumerator<T> as the local state to the enumerator function:




public static Func<Func<Option<T>>> AsFEnumerable<T>(this IEnumerable<T> source)
{
    return () =>
    {
        var e = source.GetEnumerator();
        return () => e.MoveNext()
            ? (Option<T>)new Option<T>.Some(e.Current)
            : (Option<T>)new Option<T>.None();
    };
}

This should be fairly straightforward code to grasp, ensuring we properly terminate a (finite) sequence with a None object to signal completion. The opposite operation is easy as well, now calling a FEnumerable’s enumerator function, providing results to the caller in a lazy fashion by means of a typical C# iterator:




public static IEnumerable<T> AsEnumerable<T>(this Func<Func<Option<T>>> source)
{
    var e = source();
    Option<T>.Some value;
    while ((value = e() as Option<T>.Some) != null)
    {
        yield return value.Value;
    }
}


As soon as we encounter a None object, we’ll break out of the loop, causing the consuming enumerator to terminate. Using the operators above, we can readily verify the back-and-forth conversions:




// IEnumerable -> FEnumerable
var xs = Enumerable.Range(0, 10).AsFEnumerable();
{
    var xse = xs();
    Option<int> x;
    while ((x = xse() as Option<int>.Some) != null)
        Console.WriteLine(x.Value);
}

// FEnumerable -> IEnumerable
var ys = xs.AsEnumerable();
{
    foreach (var y in ys)
        Console.WriteLine(y);
}


This is very convenient, as we’ll be able to treat arrays and other enumerable collections as FEnumerable functions in the blink of an eye. Now we can start to mix and match typical LINQ to Objects operators with our own academic playground.



On to the dual world, where we can also provide conversions between IObservable<T> and FObservable back and forth. Both are relatively easy to realize as well, but let’s start with the direction from our new world back to the old-school one:




public static IObservable<T> AsObservable<T>(this Action<Action<Option<T>>> source)
{
    return Observable.Create<T>(o =>
    {
        source(x =>
        {
            if (x is Option<T>.Some)
                o.OnNext(x.Value);
            else
                o.OnCompleted();
        });
        return () => { };
    });
}


Here I’m using Rx’s Observable.Create operator to simplify the creation of an IObservable<T>, passing in an observer’s code body. Lambda parameter “o” is an IObserver<T>, so all we’ve got to do is subscribe to our source (by means of just calling it, passing in an FObserver function) and forward received objects “x” to the external observer. As we don’t have a notion of running asynchronously in our little world, we simply return the no-op action delegate from the observer function. Since all execution happens synchronously upon a Subscribe call to the produced IObservable<T>, there’s little for us to do in reaction to an unsubscribe invocation.



In the other direction, things are even simpler. We simply use an Rx extension method for IObservable<T> to subscribe given an OnNext and OnCompleted delegate:




public static Action<Action<Option<T>>> AsFObservable<T>(this IObservable<T> source)
{
    return o =>
    {
        source.Subscribe(x => o(new Option<T>.Some(x)), () => o(new Option<T>.None()));
    };
}


Again we can test this easily, this time using Observable.Range. Since that one runs asynchronously, we have to do a bit of synchronization to see the results printed nicely:




// IObservable -> FObservable
var evt = new ManualResetEvent(false);
var xs = Observable.Range(0, 10).AsFObservable();
{
    xs(x =>
    {
        if (x is Option<int>.Some)
            Console.WriteLine(x.Value);
        else
            evt.Set();
    });
}
evt.WaitOne();

// FObservable -> IObservable
var ys = xs.AsObservable();
{
    // We got this one synchronous inside.
    ys.Subscribe(Console.WriteLine);
}


The result of all this plumbing is summarized in the following diagram. The direct conversion between a FEnumerable and FObservable (and vice versa) is left to the reader as an interesting exercise:




image 



 

Where and Select for monadic dummies



While we leave the implementation of operators like Snoc (Cons in reverse, to construct sequences out of a single element and a sequence) and Concat (concatenating arbitrary sequences to one another) to the reader (one possible shape of Concat is sketched below), we should focus on a few operators that can be realized using the essential building blocks provided before. In particular, we’ll implement Where and Select in terms of Bind, Empty and Return.
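For a taste of what such an exercise could look like, here’s one possible shape of a binary Concat over the flat representation (a sketch of mine, assuming the None-is-terminal protocol discussed earlier):

public static Func<Func<Option<T>>> Concat<T>(this Func<Func<Option<T>>> first,
    Func<Func<Option<T>>> second)
{
    return () =>
    {
        var e = first();
        bool onSecond = false;
        return () =>
        {
            var x = e();
            if (x is Option<T>.None && !onSecond)
            {
                // The first source is exhausted; switch over to the second one.
                onSecond = true;
                e = second();
                x = e();
            }
            return x;
        };
    };
}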



Recall what Bind does: it combines a sequence with sequences generated from a function call, collecting the elements of all sequences that result from those function calls. In a concrete sample: given a list of products and a way to get all the suppliers for each product, we can return a sequence of all suppliers across all products. Or with function arrows: IE<Product> -> (Product -> IE<Supplier>) -> IE<Supplier>. This is exactly the signature of Bind or SelectMany.



How can we use this to create a filter like Where? The answer is pretty simple: by controlling the “selector” function passed to Bind, making it analyze each element that’s passed in and decide whether or not to return it to Bind. The “whether or not” part can be realized using a conditional, either returning Return(element) or Empty(). And there we have our filtering logic:




public static Func<Func<Option<T>>> Where<T>(this Func<Func<Option<T>>> source, Func<T, bool> filter)
{
    return source.Bind(t => filter(t) ? FEnumerable.Return(t) : FEnumerable.Empty<T>());
}

A picture is worth a thousand words, so let’s have a look at the Where operator realization in terms of Bind:




image


And guess what, the FObservable implementation can be derived by mechanical translation from the one for FEnumerable:




public static Action<Action<Option<T>>> Where<T>(this Action<Action<Option<T>>> source, Func<T, bool> filter)
{
    return source.Bind(t => filter(t) ? FObservable.Return(t) : FObservable.Empty<T>());
}

In fact, the code is exactly the same, with FEnumerable replaced by FObservable. If we had typedefs for the function signatures, or static extension methods on a delegate type, we’d actually see both pieces of code being instances of the following “template”:




public static M<T> Where<T>(this M<T> source, Func<T, bool> filter)
{
    return source.Bind(t => filter(t) ? M<T>.Return(t) : M<T>.Empty());
}


Such an M<T> abstraction would be realized as a type constructor in Haskell and the packaging of both Return and Bind on M<T> would be realized by means of a type class that looks as follows:




class Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b


The second function is Haskell’s infix operator for bind.



How can we realize Select using Bind and Return as well? The answer is again very straightforward: this time we simply apply the projection function to the object passed to the bind selector function and wrap the result using Return. Here’s the code for both worlds, again ready to be abstracted to M<T>:




public static Func<Func<Option<R>>> Select<T, R>(this Func<Func<Option<T>>> source, Func<T, R> selector)
{
    return source.Bind(t => FEnumerable.Return(selector(t)));
}

public static Action<Action<Option<R>>> Select<T, R>(this Action<Action<Option<T>>> source, Func<T, R> selector)
{
    return source.Bind(t => FObservable.Return(selector(t)));
}


Again a picture will make the above more clear:




image


With those extension methods in place, we can actually start writing LINQ expressions against FEnumerable and FObservable (function!) objects. That’s right: now you got a delegate you can dot into, thanks to the magic of extension methods. But using convenient LINQ syntax, we don’t even have to see any of that:




var res = (from x in Enumerable.Range(0, 10).AsFEnumerable()
           where x % 2 == 0
           select x + 1).AsEnumerable();

foreach (var x in res)
    Console.WriteLine(x);


Notice how we go back and forth between the classic IEnumerable<T> and our FEnumerable implementation? The key thing to see here is that our Where and Select operators are getting called. The result obviously prints 1, 3, 5, 7, 9, and to convince ourselves of the calls happening to our methods, we’ll have a look in the debugger:




image


I hope this suffices to convince the reader we got query expression syntax working around our MinLINQ implementation. It’s left to the reader to decipher the exact call stack we’re observing above. The same exercise can be repeated for the FObservable case, using the following equivalent code:




var res = (from x in Observable.Range(0, 10).AsFObservable()
           where x % 2 == 0
           select x + 1).AsObservable();

res.Subscribe(Console.WriteLine);
Console.ReadLine(); // Stuff happening on the background; don't exit yet


Since Bind is none other than SelectMany in disguise, we could rename it as such to enable it for use in LINQ as well, triggered by query expressions having multiple from clauses. In fact, to fully enable query expressions of that form, you’ll need a slight tweak to the SelectMany signature, as follows (same for the observable case of course):




public static Func<Func<Option<R>>> SelectMany<T, C, R>(this Func<Func<Option<T>>> source,
    Func<T, Func<Func<Option<C>>>> selector, Func<T, C, R> result)
{
    // Left as an exercise.
}
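One way to fill in the exercise (a sketch, composing the primitives we already have) is to Bind twice and Return the combined result:

public static Func<Func<Option<R>>> SelectMany<T, C, R>(this Func<Func<Option<T>>> source,
    Func<T, Func<Func<Option<C>>>> selector, Func<T, C, R> result)
{
    // For each t in source, enumerate selector(t) and combine every c with t.
    return source.Bind(t => selector(t).Bind(c => FEnumerable.Return(result(t, c))));
}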

With a correct implementation in place, you will be able to run a query of the following shape:




var res = (from x in Enumerable.Range(1, 5).AsFEnumerable()
           from y in Enumerable.Range(1, x).AsFEnumerable()
           select new string((char)('a' + x - 1), y)).AsEnumerable();

foreach (var item in res)
    Console.WriteLine(item);

This will print the following output:




a
b
bb
c
cc
ccc
d
dd
ddd
dddd
e
ee
eee
eeee
eeeee

Finally, just to go nuts with some back-and-forth transitioning between all worlds (as shown in our diagram before), an all-inclusive sample mixing all sorts of execution:




var res = (from y in
               (from x in Enumerable.Range(0, 20).AsFEnumerable()
                where x % 2 == 0
                select x + 1).AsEnumerable()
               .ToObservable() // Rx
               .AsFObservable()
           where y % 3 == 0
           select y * 2)
          .AsObservable()
          .ToEnumerable(); // Rx

foreach (var item in res)
    Console.WriteLine(item);


The interested reader is invited to create short-circuiting operators to provide a direct path for .AsEnumerable().ToObservable().AsFObservable() and .AsObservable().ToEnumerable().AsFEnumerable(). Refer back to the diagram to see where those operators’ corresponding arrows occur.



 


Fueling Range and Sum with Ana and Cata



To conclude this post, let’s also have a look at how to derive constructor and aggregation operators from our Ana and Cata primitives. As a sequence constructor we’ll consider Range, and for the aggregator we’ll consider Sum. Let’s start with Range in terms of Ana:




public static Func<Func<Option<int>>> Range(int from, int length)
{
    return FEnumerable.Ana<int>(from, x => x < from + length, x => x + 1);
}


and (again exactly the same code thanks to the shared primitives)




public static Action<Action<Option<int>>> Range(int from, int length)
{
    return FObservable.Ana<int>(from, x => x < from + length, x => x + 1);
}

Now we can get rid of the AsFEnumerable() use in our samples when creating a range and construct our range sequence immediately in our world (similar example for FObservable of course):




var res = (from x in FEnumerableEx.Range(0, 10)
           where x % 2 == 0
           select x + 1).AsEnumerable();

foreach (var x in res)
    Console.WriteLine(x);


As an exercise, also abstract the AsEnumerable call followed by foreach into a Run method, as seen in System.Interactive, so that you can write the code below. Implement this operator in terms of Cata (!):




(from x in FEnumerableEx.Range(0, 10)
 where x % 2 == 0
 select x + 1).Run(
    Console.WriteLine
);
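One way to realize that Run on top of Cata (a sketch of mine) is to fold purely for the side-effect, threading a dummy accumulator through:

public static void Run<T>(this Func<Func<Option<T>>> source, Action<T> action)
{
    // The accumulator is irrelevant; Cata drives the enumeration while we act on each element.
    source.Cata(0, (_, x) => { action(x); return 0; });
}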


(Question: could you benefit from such an operator in FObservable as well?)



For the Sum realization we can use Cata:




public static int Sum(this Func<Func<Option<int>>> source)
{
    return source.Cata(0, (sum, x) => sum + x);
}


and




public static int Sum(this Action<Action<Option<int>>> source)
{
    return source.Cata(0, (sum, x) => sum + x);
}


The following example illustrates how to sum 1 to 10 using Range and Sum:




Console.WriteLine(FEnumerableEx.Range(1, 10).Sum());
Console.WriteLine(FObservableEx.Range(1, 10).Sum());

Both print 55 just fine.



As a further exercise, implement more aggregation operators as found in the Standard Query Operators. Also think about how to implement those over nullable value types (e.g. Sum with int?). Could you reuse Option<T> as an alternative to nullables? Could you reuse monadic computation to carry out nullable arithmetic (tip: the Maybe monad)? A few aggregates that some people don’t see as aggregates include All, Any, First, Last, ElementAt, and more. Don’t forget to implement those either (most of them should be a one-liner making a single call to Cata). As an additional caveat, the following implementation of Average is inadequate (why? see the note below):




public static double Average(this Func<Func<Option<int>>> source)
{
    return (double)source.Sum() / source.Count();
}
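The problem: it enumerates the source twice, once for Sum and once for Count, thereby duplicating any side-effects (and work) the source may exhibit. A single-pass alternative (a sketch) folds into a sum/count pair using one Cata call:

public static double Average(this Func<Func<Option<int>>> source)
{
    // One pass: accumulate the running sum and element count together.
    var acc = source.Cata(new { Sum = 0, Count = 0 },
        (a, x) => new { Sum = a.Sum + x, Count = a.Count + 1 });
    return (double)acc.Sum / acc.Count;
}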

 


Conclusion



Boiling down LINQ to its core essence can be fun and a great eye-opener to many users of the technology. While optimizations often mandate a lower degree of layering, it’s good to have an idea of the conceptual layering of various operators, to see which ones are essential and which ones are not so much. If kids can build castles out of Lego blocks, surely every self-respecting developer should be able to exploit the expressive power of a few primitive building blocks to create great libraries and applications. Choosing the right set of primitives can get you a long way in such a design, as illustrated in this post. Readers who can’t get enough of essential primitives and the composition thereof are cordially invited to have a go at another Crazy Sunday post titled Unlambda .NET – With a Big Dose of C# 3.0 Lambdas (and many others in that category).



In the continuation of my “More LINQ with System.Interactive” series we’ll get back to less academic stuff with System.Interactive. And before I forget: a happy 2010!

Wednesday, December 30, 2009  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about new combinator operators provided by EnumerableEx:

image

 

Combine and conquer?

Combinators are at the heart of LINQ’s expressive power, allowing sequences to be combined into new ones. In earlier posts, I’ve shown the essence of monadic computation through the following illustration:

image

It’s fair to say that SelectMany (or Bind) is the mother of all combinators, as many others can be derived from it (Exercise: implement Where and Select using SelectMany and a limited number of auxiliary operators like Return). In today’s post we’ll look at various new combinators added to the IEnumerable<T> set of operators.

So, what’s a combinator? In one world view (the one we’re using), it’s an operator that combines one or more instances of a given entity into a new such entity. For example, in functional programming we got S, K and I combinators that act on functions:

S x y z = x z (y z)
K x y = x
I x = x

A more precise definition can be found on http://en.wikipedia.org/wiki/Combinator, for those interested in more foundational stuff. In our case, we’ll combine one or more IEnumerable<T> instances into a new IEnumerable<R> (where R can be different from T).

 

Concat, now with more arguments

LINQ to Objects has always had a Concat operator, with the following signature:

public static IEnumerable<TSource> Concat<TSource>(this IEnumerable<TSource> first, IEnumerable<TSource> second);


However, this is merely a special case of a more general version of Concat, introduced in EnumerableEx:




public static IEnumerable<TSource> Concat<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> Concat<TSource>(this IEnumerable<IEnumerable<TSource>> sources);


The second one is the core operator we’re talking about here, with the first overload providing convenience due to the lack of a “params enumerable” feature in the language. The Concat operator is simple to understand, simply yielding all TSource objects from all sequences in the sources parameter. If an error occurs during enumeration of any of the sequences, the resulting concatenated sequence is terminated as well. In that respect, this operator is very similar to OnErrorResumeNext, except that the latter ignores the error condition and moves on to the next sequence.



Below is a sample illustrating the main scenarios:




new[] {
    new[] { 1, 2 },
    new[] { 3, 4 },
    new[] { 5, 6 }
}
.Concat()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

new[] {
    new[] { 1, 2 },
    new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new Exception())),
    new[] { 5, 6 }
}
.Concat()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

The first sample will print numbers 1 through 6, while the second one will print 1 through 4 and an error notification.



image



 




Merge, a parallel Concat



Where Concat will proceed through the sources collection sequentially, guaranteeing in-order retrieval of data, one could get all the data from the sources in a parallel manner as well. To do so, Merge spawns workers that drain all of the sources in parallel, flattening or “sinking” all the results to the caller:




public static IEnumerable<TSource> Merge<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> Merge<TSource>(this IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> Merge<TSource>(this IEnumerable<TSource> leftSource, IEnumerable<TSource> rightSource);


The three overloads share the same signatures as the Concat equivalents, with the second one being the most general overload again. The same sample as for Concat can be used to illustrate the working:




new[] {
    new[] { 1, 2 },
    new[] { 3, 4 },
    new[] { 5, 6 }
}
.Merge()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

new[] {
    new[] { 1, 2 },
    new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new Exception())),
    new[] { 5, 6 }
}
.Merge()
.Materialize(/* for pretty printing */)
.Run(Console.WriteLine);

What the results are will depend on the mood of your task scheduler. Either way, for the first sample you should get to see all of the numbers from 1 through 6 getting printed, in any order (though 1 will come before 2, 3 before 4 and 5 before 6). On my machine I got 1, 3, 5, 4, 2, 6 in my first run. For the second sample, it’s entirely possible to see 5 and 6 getting printed before the exception for the second source is reached. But then that’s what you expect from parallel computation, don’t you?



Merge can speed up your data retrieval operations significantly, if you don’t care about the order in which results are returned. For example, you could cause two LINQ to SQL queries that provide stock quotes to run in parallel by using Merge, followed by a client-side duplicate entry elimination technique:




var stocks =
    from quote in
        EnumerableEx.Merge(
            (from quote in t1 select quote).Do(q => Console.WriteLine("t1: " + q)),
            (from quote in t2 select quote).Do(q => Console.WriteLine("t2: " + q))
        )
    group quote by quote.Symbol into g
    select new { g.Key, Price = g.Average(p => p.Price) };

stocks.Run(Console.WriteLine);


Results could look as follows, with the main idea being the parallel retrieval of both query results:




Query: SELECT Symbol, Price FROM Trader1
Query: SELECT Symbol, Price FROM Trader2
t2: { Symbol = MSFT, Price = 30.94 }
t1: { Symbol = MSFT, Price = 30.99 }
t1: { Symbol = ORCL, Price = 24.92 }
t1: { Symbol = GOOG, Price = 618.35 }
t1: { Symbol = AAPL, Price = 209.10 }
t2: { Symbol = ORCL, Price = 25.06 }
t2: { Symbol = GOOG, Price = 610.25 }
t2: { Symbol = AAPL, Price = 204.99 }
{ Key = MSFT, Price = 30.965 }
{ Key = ORCL, Price = 24.99 }
{ Key = GOOG, Price = 614.30 }
{ Key = AAPL, Price = 207.045 }


image



(Note: behavior in the face of an exception will depend on timing and is not included in the diagram.)



 


Amb, a racing game



Amb is the ambiguous operator as introduced by McCarthy in 1963. Because of its nostalgic background, it’s been chosen to preserve the name as-is instead of expanding it. What’s so ambiguous about this operator? Well, the idea is that Amb allows two sequences to race to provide the first result, with the winning sequence elected as the one providing the operator’s resulting sequence. The operator’s signatures make this clear:




public static IEnumerable<TSource> Amb<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> Amb<TSource>(this IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> Amb<TSource>(this IEnumerable<TSource> leftSource, IEnumerable<TSource> rightSource);


Again, there are three overloads, just like for Concat and Merge. To provide a sample of the operator’s behavior, use the following simple implementation of a Delay operator:




public static IEnumerable<TSource> Delay<TSource>(this IEnumerable<TSource> source, int delay)
{
    return EnumerableEx.Defer(() => { Thread.Sleep(delay); return source; });
}

Now we can write the following two test cases:




var src1 = new[] { 1, 2 }.Delay(300);
var src2 = new[] { 3, 4 }.Delay(400);
src1.Amb(src2).Run(Console.WriteLine);

var src3 = new[] { 5, 6 }.Delay(400);
var src4 = new[] { 7, 8 }.Delay(300);
src3.Amb(src4).Run(Console.WriteLine);

The expected result will be that src1 and src4 win their Amb battles against src2 and src3, respectively. One practical use for this operator is to have two or more redundant data sources, all containing the same data, fight to provide the quickest answer to a query. Here’s a sample illustrating this:




var stocks =
    EnumerableEx.Amb(
        (from quote in t1 select quote).Do(q => Console.WriteLine("t1: " + q)),
        (from quote in t2 select quote).Do(q => Console.WriteLine("t2: " + q))
    );

stocks.Run(Console.WriteLine);


Results could look as follows, assuming t2 was the quickest to provide an answer:




Query: SELECT Symbol, Price FROM Trader1
Query: SELECT Symbol, Price FROM Trader2
t2: { Symbol = MSFT, Price = 30.94 }
t2: { Symbol = ORCL, Price = 25.06 }
t2: { Symbol = GOOG, Price = 610.25 }
t2: { Symbol = AAPL, Price = 204.99 }
{ Key = MSFT, Price = 30.94 }
{ Key = ORCL, Price = 25.06 }
{ Key = GOOG, Price = 610.25 }
{ Key = AAPL, Price = 204.99 }


image







 


Repeat, again and (maybe) again



The purpose of Repeat is self-explanatory, and it could be seen as a constructor function as well. Two categories of overloads exist: one that takes a single element and an optional repeat count (unspecified = infinite), and another that takes a sequence and an optional repeat count. While the former is more of a constructor, the latter is more of a combinator over a single input sequence:



public static IEnumerable<TSource> Repeat<TSource>(this IEnumerable<TSource> source);
public static IEnumerable<TSource> Repeat<TSource>(TSource value);
public static IEnumerable<TSource> Repeat<TSource>(this IEnumerable<TSource> source, int repeatCount);
public static IEnumerable<TSource> Repeat<TSource>(TSource value, int repeatCount);



Samples don’t need much further explanation either:




EnumerableEx.Repeat(1).Take(5).Run(Console.WriteLine);
EnumerableEx.Repeat(2, 5).Run(Console.WriteLine);

new[] { 3, 4 }.Repeat().Take(4).Run(Console.WriteLine);
new[] { 5, 6 }.Repeat(2).Run(Console.WriteLine);

It goes almost without saying that an input sequence causing an exception will also terminate the enumeration of a repeated form of the same sequence:




new[] { 5, 6 }.Concat(EnumerableEx.Throw<int>(new Exception())).Repeat(2).Run(Console.WriteLine);


image



 


Zip ‘em together



I’ve covered the new Zip operator, introduced in .NET 4.0, already in my earlier post on C# 4.0 Feature Focus - Part 3 - Intermezzo: LINQ's new Zip operator. Rx ports this operator back to the .NET 3.5 System.Interactive library for consistency. In summary, Zip walks two sequences hand-in-hand, combining their respective yielded elements using a given function to produce a result. The signature is as follows:




public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(this IEnumerable<TFirst> first, IEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector);


A simple example is shown below:




Enumerable.Range(1, 26).Zip(
    "abcdefghijklmnopqrstuvwxyz",
    (i, c) => "alpha[" + i + "] = " + c
).Run(Console.WriteLine);


In here, the first sequence is an IEnumerable<int> and the second one is a string, hence an IEnumerable<char>. The result is a table of mappings between numbers and letters. As an exercise, implement the following overload of Select using Zip and Generate, in terms of the more commonly used overload of Select that doesn’t take a position in the selector function:




public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, int, TResult> selector);
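One possible solution sketch, assuming the Generate operator covered in the constructor-operators installment further down this feed; the infinite index sequence relies on Zip stopping as soon as the shorter side does:

public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, int, TResult> selector)
{
    // 0, 1, 2, ... ad infinitum; Zip caps it at the length of source.
    var indexes = EnumerableEx.Generate(0, _ => true, i => i, i => i + 1);
    return source.Zip(indexes, (x, i) => new { x, i })
                 .Select(p => selector(p.x, p.i));
}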


One thing that’s interesting about the interactive version of Zip is its left-to-right characteristic with regards to enumeration of first and second. Internally, it does something along the following lines:




while (first.MoveNext() && second.MoveNext())
    …


In other words, “first” is dominant in that it can prevent a MoveNext call on second from happening, e.g. because of an exception getting thrown, non-termination (stuck forever), or termination (returning false). The following matrix shows the implications of this:



[figure: Zip termination matrix]



It’s left as an exercise to the reader to implement the right-hand side behavior (notice the transposition symmetry!) for fun, where a Zip could fetch results from both sources simultaneously, combining their results or exceptions into produced results. What are the advantages and disadvantages of such an approach? As an additional question, think about ways to detect and report an asymmetric zip, where one of the two sides still has an element while the other side has signaled termination.



Finally, here’s the diagram illustrating some of the regular operations of Zip. Other combinations of behavior can be read from the matrix above.



[marble diagram: Zip]



 


Scan, a running aggregation operator



Readers familiar with the LINQ to Objects APIs will know about the Aggregate operator, which we also mentioned before when talking about the new Generate operator (as the opposite of Aggregate). Aggregate “folds” or reduces a sequence of elements into a single value, eating the elements one by one using some specified function. However, sometimes you may not want to lose the intermediate results, e.g. if you want to compute a running sum. Scan allows you to do so:




public static IEnumerable<TSource> Scan<TSource>(this IEnumerable<TSource> source, Func<TSource, TSource, TSource> accumulator);
public static IEnumerable<TAccumulate> Scan<TSource, TAccumulate>(this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, TAccumulate> accumulator);


You’ll see big similarities with the existing Aggregate operator when looking at the signatures above, and use of the operator is straightforward as well:




Enumerable.Range(1, 10)
    .Scan((sum, i) => sum + i)
    .Run(Console.WriteLine);

Enumerable.Range(2, 9).Reverse()
    .Scan(3628800, (prod, i) => prod / i)
    .Run(Console.WriteLine);


The first sample will simply print 1, 1+2 = 3, 3+3 = 6, 6+4 = 10, … In the second sample, a seed value (3628800 = 10!) is used to illustrate an inverse factorial computation, dividing the running value by subsequent descending values (from 10 down to 2).
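A minimal sketch of the seeded overload — not the actual Rx source — shows how little machinery is involved:

static IEnumerable<TAccumulate> Scan<TSource, TAccumulate>(this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, TAccumulate> accumulator)
{
    var acc = seed;
    foreach (var item in source)
    {
        // Unlike Aggregate, every intermediate accumulator value is yielded.
        acc = accumulator(acc, item);
        yield return acc;
    }
}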






 


SelectMany











Finally, as an honor to the monadic bind operator, a new overload was added for SelectMany :-). Its signature is shown below, and it’s left to the reader to figure out what it does (simple):




public static IEnumerable<TOther> SelectMany<TSource, TOther>(this IEnumerable<TSource> source, IEnumerable<TOther> other);
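Presumably — a hedged sketch, not the official implementation — it ignores the source elements and enumerates other once per element:

static IEnumerable<TOther> SelectMany<TSource, TOther>(this IEnumerable<TSource> source, IEnumerable<TOther> other)
{
    // The monadic "sequence" operation: bind that discards its input.
    return source.SelectMany(_ => other);
}

// E.g. new[] { 1, 2, 3 }.SelectMany(new[] { "a", "b" }) yields a, b, a, b, a, b.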

 


Next on More LINQ



Functionally inspired constructs for sharing enumerables and taming their side-effects.

Tuesday, December 29, 2009  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about the materialization and dematerialization operators provided by EnumerableEx:

[figure: operators covered in this installment]

 

von Neumann was right

Code and data are very similar animals, much more similar than you may expect them to be. We can approach this observation from two different angles, one being a machine-centric view. Today’s computers are realizations of von Neumann machines where instructions and data are treated on the same footage from a memory storage point of view. While this is very useful, it’s also the source of various security-related headaches such as script or SQL injection and data execution through e.g. stack overruns (Data Execution Prevention is one mitigation).

Another point of view goes back to the foundational nature of programming, in particular the essentials of functional programming, where functions are used to represent data. An example are Church numerals, which are functions that are behaviorally equivalent to natural numbers (realized by repeated application of a function, equal in number to the natural number being represented). This illustrates how something that seems exclusively code-driven can be used to represent or mimic data.

If the above samples seem far-fetched or esoteric, there are a variety of more familiar grounds where the “code as data” paradigm is used or exploited. One such sample is LISP where code and data representation share the same syntactical form and where the technique of quotation can be used to represent a code snippet as data for runtime inspection and/or manipulation. This is nothing other than meta-programming in its earliest form. Today we find exactly the same principle back in C#, and other languages, through expression trees. The core property here is so-called homo-iconicity, where code can be represented as data without having to resort to a different syntax (homo = same; iconic = appearance):

Func<int, int> twiceD = x => x * 2;
Expression<Func<int, int>> twiceE = x => x * 2;

What what does all of this have to do with enumerable sequences? Spot on! The matter is that sequences seem to be a very data-intensive concept, and sure they are. However, the behavior and realization of such sequences, e.g. through iterators, can be very code-intensive as well, to such an extent that we introduced means to deal with exceptions (Catch for instance) and termination (Repeat, restarting after completing). This reveals that it’s useful to deal with all possible states a sequence can go through. Guess what, state is data.

 

The holy trinity of IEnumerator<T> and IObserver<T> states

In all the marble diagrams I’ve shown before, there was a legend consisting of three potential states an enumerable sequence can go through as a result of iteration. Those three states reflect possible responses to a call to MoveNext caused by the consumer of the sequence:

[figure: the three iteration outcomes]

In the world of IObserver<T>, the dual to IEnumerator<T> as we saw in earlier episodes, those three states are reflected in the interface definition directly, with three methods:

// Summary:
//     Supports push-style iteration over an observable sequence.
public interface IObserver<T>
{
    // Summary:
    //     Notifies the observer of the end of the sequence.
    void OnCompleted();
    //
    // Summary:
    //     Notifies the observer that an exception has occurred.
    void OnError(Exception exception);
    //
    // Summary:
    //     Notifies the observer of a new value in the sequence.
    void OnNext(T value);
}

Instead of having an observer getting called on any of those three methods, we could equally well record the states “raised” by the observable, which turns calls (code) into object instances (data) of type Notification<T>. This operation is called materialization. Thanks to dualization, the use of Notification<T> can be extended to the world of enumerables as well.






Notification<T> is a discriminated union with three notification kinds, reflecting the three states we talked about earlier:




public enum NotificationKind
{
    OnNext = 0,
    OnError = 1,
    OnCompleted = 2,
}

 


It’s a material dual world



Materialization is the act of taking a plain enumerable and turning it into a data-centric view based on Notification<T>. Dematerialization reverses this operation, going back to the code-centric world. Thanks to this back-and-forth movement between the two worlds of code and data, we can use LINQ over notification sequences and put the result back into the regular, familiar IEnumerable<T> world. A figure makes this clear:




[figure: Materialize/Dematerialize round-trip]


The power of this lies in the ability to use whatever domain is more convenient to perform operations over a sequence. Maybe you want to do thorough analysis of error conditions, corresponding to the Error notification kind, or maybe it’s more convenient to create a stream of notification objects before turning them into a “regular” sequence of objects that could exhibit certain additional behavior (like error conditions). This is exactly the same as the tricks played in various other fields, like mathematics, where one can do Fourier analysis in either the time or the frequency domain. Sometimes one is more convenient than the other; all that counts is to know there are reliable ways to go back and forth.




[figure]



(Note: For the Queryable sample, you may want to end up in the bottom-right corner, so the AsQueryable call is often omitted.)


 


Materialize and Dematerialize



What remains to be said in this post are the signatures of the operators and a few samples. First, the signatures:




public static IEnumerable<Notification<TSource>> Materialize<TSource>(this IEnumerable<TSource> source);
public static IEnumerable<TSource> Dematerialize<TSource>(this IEnumerable<Notification<TSource>> source);

An example of materialization is shown below, where we take a simple range generator to materialize. We expect to see OnNext notifications for all the numeric values emitted, terminated by a single OnCompleted call:




Enumerable.Range(1, 10)
.Materialize()
.Run(Console.WriteLine);


This prints:




OnNext(1)
OnNext(2)
OnNext(3)
OnNext(4)
OnNext(5)
OnNext(6)
OnNext(7)
OnNext(8)
OnNext(9)
OnNext(10)
OnCompleted()


A sample where an exception is triggered by the enumerator is shown below. Notice the code won’t blow up when enumerating over the materialized sequence: the exception is materialized as a passive exception object instance in an error notification.




Enumerable.Range(1, 10).Concat(EnumerableEx.Throw<int>(new Exception()))
.Materialize()
.Run(Console.WriteLine);


The result is as follows:




OnNext(1)
OnNext(2)
OnNext(3)
OnNext(4)
OnNext(5)
OnNext(6)
OnNext(7)
OnNext(8)
OnNext(9)
OnNext(10)
OnError(System.Exception)


Starting from a plain IEnumerable<T>, the grammar of notifications to be expected is as follows:




OnNext* ( OnCompleted | OnError )?


In the other direction, starting from the world of IEnumerable<Notification<T>>, one can write a richer set of sequences defined by the following grammar:




( OnNext | OnCompleted | OnError )*


For example:




var ns = new Notification<int>[] {
    new Notification<int>.OnNext(1),
    new Notification<int>.OnNext(2),
    new Notification<int>.OnCompleted(),
    new Notification<int>.OnNext(3),
    new Notification<int>.OnNext(4),
    new Notification<int>.OnError(new Exception()),
    new Notification<int>.OnNext(5),
};


Dematerializing this sequence of notifications will produce an enumerable sequence that will run no further than the first OnCompleted or OnError:




ns
.Dematerialize()
.Run(Console.WriteLine);


This prints 1 and 2 and then terminates. The reason this can still be useful is to create a stream of notifications that will be pre-filtered before doing any dematerialization operation on it. For example, a series of batches could be represented in the following grammar:




( OnNext* OnCompleted )*


If the user requests to run n batches, the first n – 1 OnCompleted notifications can be filtered out using some LINQ query expression, before doing dematerialization.
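For instance, a hedged sketch along these lines, where batches is a hypothetical IEnumerable<Notification<int>> following the grammar above:

int n = 2, completions = 0;
batches
    // Drop the first n - 1 OnCompleted markers; the n-th one then ends the
    // dematerialized sequence after n batches' worth of OnNext values.
    .Where(xn => xn.Kind != NotificationKind.OnCompleted || ++completions >= n)
    .Dematerialize()
    .Run(Console.WriteLine);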



Finally, a sample of some error-filtering code going back and forth between IEnumerable<T> and IEnumerable<Notification<T>> showing practical use for those operators when doing sophisticated error handling:




var xs1 = new[] { 1, 2 }.Concat(EnumerableEx.Throw<int>(new InvalidOperationException()));
var xs2 = new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new ArgumentException()));
var xs3 = new[] { 5, 6 }.Concat(EnumerableEx.Throw<int>(new OutOfMemoryException()));
var xs4 = new[] { 7, 8 }.Concat(EnumerableEx.Throw<int>(new ArgumentException()));

var xss = new[] { xs1, xs2, xs3, xs4 };
var xns = xss.Select(xs => xs.Materialize()).Concat();

var res = from xn in xns
          let isError = xn.Kind == NotificationKind.OnError
          let exception = isError ? ((Notification<int>.OnError)xn).Exception : null
          where !isError || exception is OutOfMemoryException
          select xn;

res.Dematerialize().Run(Console.WriteLine);

Given some input sequences, we materialize and concatenate all of them into sequence xns. Now we write a LINQ query over the notifications to filter out exceptions, unless the exception is a critical OOM one (you could add others to this list). The result is that we see 1 through 6 being printed to the screen, after which the retained OutOfMemoryException notification resurfaces as a thrown exception upon dematerialization. (Question: What’s the relationship to OnErrorResumeNext that we saw in the previous post? What’s similar, what’s different?)



 


Exercises



As an exercise, try to implement the following operators in a notification-oriented manner:



  1. Catch

    (tip: use SelectMany and lots of conditional expressions)
  2. Finally

    (tip: use SelectMany and Defer)
  3. OnErrorResumeNext – overload taking two IEnumerable<TSource> sequences

    (tip: use TakeWhile)
  4. Retry – overload with a retry count

    (tip: recursion, ignore stack overflow conditions)

The skeleton code for those operators is shown below:




return
    source
        .Materialize()
        // Your stuff here
        .Dematerialize();
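For instance, exercise 2 could look roughly like this — a sketch following the stated tips; note it only fires the action upon reaching a terminal notification, not when the consumer abandons iteration early:

static IEnumerable<TSource> Finally<TSource>(this IEnumerable<TSource> source, Action finallyAction)
{
    return source
        .Materialize()
        .SelectMany(n =>
            n.Kind == NotificationKind.OnNext
                ? EnumerableEx.Return(n)
                // Defer runs the action just before the terminal notification is yielded.
                : EnumerableEx.Defer(() => { finallyAction(); return EnumerableEx.Return(n); }))
        .Dematerialize();
}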


All-inclusive unit test:




new[] { 1, 2 }
    .Finally(() => Console.WriteLine("Finally inner"))
    .Concat(EnumerableEx.Throw<int>(new InvalidOperationException()))
    .Catch((InvalidOperationException _) => new[] { 3, 4 }.Concat(EnumerableEx.Throw<int>(new Exception())))
    .Finally(() => Console.WriteLine("Finally outer"))
    .OnErrorResumeNext(new[] { 5, 6 })
    .Concat(EnumerableEx.Throw<int>(new ArgumentException()))
    .Retry(2)
    .Run(Console.WriteLine);


This should produce the same results with the built-in operators and with your implementation of those operators. More specifically, the result has to be:




1
2
Finally inner
3
4
Finally outer
5
6
1
2
Finally inner
3
4
Finally outer
5
6


with no exception leaking to the surface at the call site (the behavior of Retry after the retry count has been exceeded).



 


Next on More LINQ



Various combinators to combine or transform existing enumerable sources into others.

Monday, December 28, 2009  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about constructor operators provided by EnumerableEx:

[figure: operators covered in this installment]

 

Constructing sequences

In order to perform operations over sequences using various combinators and operators, it’s obviously a prerequisite to have such sequences available. While collection types in the .NET Framework implement IEnumerable<T> (or the non-generic counterpart, bridgeable to LINQ using the Cast<T> Standard Query Operator), one often wants to construct sequences on the spot. Moreover, sequences often should have a lazy nature as their persistence in memory may be problematic or infeasible (infinite sequences). For all those reasons, constructor operators come in handy.

LINQ to Objects already has a constructor function called Enumerable.Range to produce a sequence of integral numbers starting from a certain value, returning the requested amount of numbers lazily:

// Imperative
for (int i = 0; i < 10; i++)
{
    Console.WriteLine(i);
}

// LINQish
Enumerable.Range(start: 0, count: 10).Run
(
    Console.WriteLine
);

The lazy nature should not be underestimated, as one could create infinite sequences representing the potential to produce a certain (ordered) set of objects. When combined with other restriction operators, it becomes possible to use composition to limit the produced results in a manner very close to the domain we’re talking about. For example, the natural numbers are the integers larger than or equal to zero. Getting the numbers starting from 5 is a matter of a Skip operation or something similar, and taking a number of elements can be done using Take. Without deviating too much from today’s blogging mission, here’s what I’m alluding to:




static IEnumerable<int> Integer()
{
    for (int i = int.MinValue; i < int.MaxValue; i++)
        yield return i;

    yield return int.MaxValue;
}



var ints = Integer();
var nats = from i in ints where i >= 0 select i;
var some = nats.Skip(5).Take(5); // Good luck :-)
some.Run(Console.WriteLine);


I’ll leave it to the reader as a challenge to come up with ways to optimize this whilst preserving the declarative nature at the use site (i.e. make the sarcastic “Good luck” go away).



Back to Rx: in today’s installment we’ll look at various constructor functions in EnumerableEx.



 


Return and the cruel return of the monad



The simplest constructor function is Return, simply yielding the single value specified on demand. It’s similar to a one-element array and that’s about it from a practical point of view:




public static IEnumerable<TSource> Return<TSource>(TSource value);

You should be able to guess the implementation of the operator for yourself. Use is straightforward as shown below:




EnumerableEx.Return(42).Run(Console.WriteLine);


One interesting thing about this constructor function is its signature, going from TSource to IEnumerable<TSource>. This is nothing but the return function (sometimes referred to as unit) used on a monad, with a more general signature of T to M<T>, the little brother to the bind function which has signature M<T> -> (T -> M<R>) -> M<R>, also known as SelectMany in LINQ. The triplet (known as a Kleisli triple) of the type constructor M (in LINQ, the particular cases of IEnumerable<T> and IQueryable<T> are used, i.e. not a general type constructor), the unit function and the bind function forms a monad.
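For the record, here are the three monad laws phrased in these LINQ terms, with f and g standing for arbitrary functions of type T -> IEnumerable<R>:

// 1. Left identity:  EnumerableEx.Return(x).SelectMany(f)  behaves as  f(x)
// 2. Right identity: m.SelectMany(EnumerableEx.Return)      behaves as  m
// 3. Associativity:  m.SelectMany(f).SelectMany(g)  behaves as  m.SelectMany(x => f(x).SelectMany(g))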






For a great overview of Language Integrated Monads, have a look at Wes Dyer’s The Marvels of Monads post. For a more foundational paper (with lots of applications though), have a look at Philip Wadler’s Monads for Functional Programming paper.



 


Throw me an exception please



Another singleton constructor is the Throw function that we’ve seen repeatedly in the previous post on exception handling over sequences. Its role is to provide an enumerable that will throw an exception upon the first MoveNext call during enumeration:




public static IEnumerable<TSource> Throw<TSource>(Exception exception);

In fact, this is a lazily thrown exception constructor. Use is simple again:




EnumerableEx.Throw<int>(new Exception()).Run();

Notice you have to specify the element type for the returned (never-yielding) sequence, as we’re constructing an IEnumerable<T> and there’s no information to infer T from. Obviously, the resulting sequence can be combined with other sequences of the same type in various places, e.g. using Concat. Below is a sample of how to use the Throw constructor with SelectMany to forcefully reject even numbers in a sequence (rather than filtering them out):




var src = Enumerable.Range(1, 10);//.Where(i => i % 2 != 0);
var res = src.SelectMany(i =>
    i % 2 == 0
        ? EnumerableEx.Throw<int>(new Exception("No evens please!"))
        : EnumerableEx.Return(i)
);
res.Run(Console.WriteLine);

Here we use the conditional operator to decide between an exception throwing sequence or a singleton element sequence (in this case, “Many” in “SelectMany” has “Single” semantics).



 


Empty missing from the triad



For completeness, we could have provided an Empty constructor as well, with the following signature and implementation:




public static IEnumerable<TSource> Empty<TSource>()
{
    yield break;
}

There seems little use for this, though I challenge the reader to use this one to build the Where operator using SelectMany (a possible sketch follows the figure below). In fact, the reason I say “for completeness” is illustrated below:



[figure]
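As for the challenge above, a minimal sketch might look as follows (assuming the Empty definition just shown):

static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
{
    // Keep an element by returning a singleton sequence; drop it with Empty.
    return source.SelectMany(x => predicate(x) ? EnumerableEx.Return(x) : Empty<T>());
}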



 


StartWith = Snoc (or Cons in disguise)



People familiar with LISP, ML, Scala, and many other functional languages will know the concept of cons by heart. Cons is nothing but the abbreviation for “construct”, used to create a bigger list (in LISP lingo) out of an existing list and an element to be prepended:




(cons 1 (cons 2 nil))


The above creates a list with 1 as the head and (cons 2 nil) as the tail, which by itself expands into a cell containing 2 and a tail with the nil (null) value. The underlying pair of the head value and tail “reference” to the tail list is known as a cons cell. Decomposition operators exist, known as car and cdr (from old IBM machine terminology where cons cells were realized in machine words consisting of a so called “address” and “decrement” register, explaining the a and d in car and cdr – c and r stand for content and register respectively):




(car (cons 1 2)) == 1

(cdr (cons 1 2)) == 2


The StartWith operator is none other than Cons in reverse (sometimes jokingly referred to as “Snoc” by functional programmers):




public static IEnumerable<TSource> StartWith<TSource>(this IEnumerable<TSource> source, params TSource[] first);
public static IEnumerable<TSource> StartWith<TSource>(this IEnumerable<TSource> source, TSource first);


Focus on the second one first. See how the “first” parameter is taken in as the second argument to StartWith. The reason is that it’d be very invasive to put the extension method’s this parameter on the “first” parameter, as it would pollute all types in the framework with a “Cons” method:




public static IEnumerable<TSource> Cons<TSource>(this TSource head, IEnumerable<TSource> tail);


So, StartWith has to be read in reverse as illustrated below:




EnumerableEx.StartWith(
    EnumerableEx.StartWith(
        EnumerableEx.Return(3),
        2
    ),
    1
).Run(Console.WriteLine);


This prints 1, 2, 3 since 2 is put in front of 3 and 1 in front of that { 2, 3 } result. An overload exists to start a sequence with multiple elements in front of it:




EnumerableEx.StartWith(
    EnumerableEx.Return(3),
    1, 2
).Run(Console.WriteLine);




 


Generate is your new anamorphism



Generate is the most general constructor function for sequences you can imagine. It’s the dual of Aggregate in various ways. Where Aggregate folds a sequence into a single object by combining elements in the input sequence onto a final value in a step-by-step way, the Generate function unfolds a sequence out of a generator function also in a step-by-step way. To set the scene, let’s show the power of Aggregate by refreshing its signature and showing how to implement a bunch of other LINQ combinators in terms of it:




public static TResult Aggregate<TSource, TAccumulate, TResult>(this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, TAccumulate> func, Func<TAccumulate, TResult> resultSelector);


Given a seed value and a function to combine an element of the input sequence with the current accumulator value into a new accumulator value, the Aggregate function can produce a result that’s the result of (left-)folding all elements in the sequence one-by-one. For example, a sum is nothing but a left-fold thanks to left associativity of the numerical addition operation:




1 + 2 + 3 + 4 + 5 = ((((1 + 2) + 3) + 4) + 5)


The accumulated value is the running sum of everything to the left of the current element. Seeing the elements of a sequence being eaten one-by-one is quite a shocking catastrophic event for the sequence, hence the name catamorphism. Below are implementations of Sum, Product, Min, Max, FirstOrDefault, LastOrDefault, Any and All:




var src = Enumerable.Range(1, 10);

Console.WriteLine("Sum = " + src.Aggregate(0, (sum, i) => sum + i));
Console.WriteLine("Prd = " + src.Aggregate(1, (prd, i) => prd * i));
Console.WriteLine("Min = " + src.Aggregate(int.MaxValue, (min, i) => i < min ? i : min));
Console.WriteLine("Max = " + src.Aggregate(int.MinValue, (max, i) => i > max ? i : max));
Console.WriteLine("Fst = " + src.Aggregate((int?)null, (fst, i) => fst == null ? i : fst));
Console.WriteLine("Lst = " + src.Aggregate((int?)null, (lst, i) => i));
Console.WriteLine("AlE = " + src.Aggregate(true, (all, i) => all && i % 2 == 0));
Console.WriteLine("AnE = " + src.Aggregate(false, (any, i) => any || i % 2 == 0));


As the dual to catamorphisms we find anamorphisms, where one starts from an initial state and generates elements for the resulting sequence. I leave it to the reader to draw parallels with other words starting with ana- (from the Greek “up”). The most elaborate signature of Generate is shown below:




public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, IEnumerable<TResult>> resultSelector, Func<TState, TState> iterate);


To see that this is the dual to Aggregate, you have to use a bit of imagination, but the parallels are there. Where Aggregate takes in an IEnumerable<TSource> and produces a TResult, the Generate function produces an IEnumerable<TResult> from a given TState (and a bunch of other things). On both sides, there’s room for an initial state and a way to make progress (“func” versus “iterate”), both staying in their respective domains for the accumulation type (TAccumulate and TState). To select the result (that will end up in the output sequence), the overload above allows multiple TResult values to be produced per TState. And finally, there’s a stop condition, which is implicit in the case of a catamorphism since the “remaining tail of sequence is empty” condition can be used for it (i.e. MoveNext returns false).



Another way to look at Generate is to draw the parallel with a for loop’s three parts: initialization, termination condition, and update (a sketch follows the signature list below). In fact, Generate is implemented as a set of for-loops. More signatures exist:




public static IEnumerable<TValue> Generate<TValue>(this Func<Notification<TValue>> function);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, IEnumerable<TResult>> resultSelector, Func<TState, TState> iterate);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, Notification<TResult>> resultSelector, Func<TState, TState> iterate);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, IEnumerable<TResult>> resultSelector, Func<TState, TState> iterate);
public static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, TResult> resultSelector, Func<TState, TState> iterate);
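To make the for-loop parallel concrete, here’s a sketch of the simplest overload above (not necessarily the actual implementation):

static IEnumerable<TResult> Generate<TState, TResult>(TState initial, Func<TState, bool> condition, Func<TState, TResult> resultSelector, Func<TState, TState> iterate)
{
    // initialization; termination condition; update — just like a for loop.
    for (var state = initial; condition(state); state = iterate(state))
        yield return resultSelector(state);
}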


We’ll discuss the ones with Notification<T> types in the next episode titled “Code = Data”, but the remaining three are all straightforward to understand. Some lack a terminating condition while others lack the ability to yield multiple results per intermediate state. Below is a sample of Generate to produce the same results as Enumerable.Range:




Func<int, int, IEnumerable<int>> range = (start, count) => EnumerableEx.Generate(0, i => i < count, i => i + start, i => i + 1);


The other constructors we’ve seen can be written in terms of Generate as well:




Func<IEnumerable<int>> empty = () => EnumerableEx.Generate<object, int>(null, o => false, o => null, o => o);
Func<int, IEnumerable<int>> @return = i => EnumerableEx.Generate<int, int>(0, n => n < 1, o => new [] { i }, n => n + 1);
Func<Exception, IEnumerable<int>> @throw = ex => EnumerableEx.Generate<object, int>(null, o => true, o => { throw ex; return null; }, o => o);
Func<int, IEnumerable<int>, IEnumerable<int>> cons = (a, d) => EnumerableEx.Generate<int, int>(0, n => n < 2, o => o == 0 ? new [] { a } : d, n => n + 1);

@return(1).Run(Console.WriteLine);
@throw(new Exception()).Catch((Exception ex) => @return(22)).Run(Console.WriteLine);
cons(1, cons(2, cons(3, empty()))).Run(Console.WriteLine);

 


Defer what you can do now till later



The intrinsic lazy nature of sequences with regard to enumeration allows us to push more delayed effects into the sequence’s iteration code. In particular, the construction of a sequence can be hidden behind a sequence of the same type. Let’s show a signature to make this clearer:




public static IEnumerable<TSource> Defer<TSource>(Func<IEnumerable<TSource>> enumerableFactory);

In here, an IEnumerable<TSource> is created out of a factory function. What’s handed back from the call to Defer is a stub IEnumerable<TSource> that will only call its factory function (getting the real intended result sequence) upon a triggered enumeration. An example is shown below:




var xs = EnumerableEx.Defer(() =>
{
    Console.WriteLine("Factory!");
    return EnumerableEx.Return(1);
});

Console.ReadLine();

xs.Run(Console.WriteLine);
xs.Run(Console.WriteLine);

In here, the Factory message won’t be printed till something starts enumerating the xs sequence. Both calls to Run do so, meaning the factory will be called twice (and could in fact return a different sequence each time).






 


Next on More LINQ



More duality, this time between “code and data” views on sequences, introducing Notification<T>.

Sunday, December 27, 2009  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about exception handling operators provided by EnumerableEx:

[figure: operators covered in this installment]

 

Iterating with and without exceptions

Under regular circumstances, one expects sequences to produce data in response to iteration. However, it’s perfectly possible for an iterator (or any enumerable object) to throw an exception in response to a MoveNext call. For example:

Enumerable.Range(0, 10)
    .Reverse()
    .Select(x => 100 / x)
    .Run(Console.WriteLine);

This piece of code produces the following output:




11
12
14
16
20
25
33
50
100

Unhandled Exception: System.DivideByZeroException: Attempted to divide by zero.
   at Demo.Program.<Main>b__0(Int32 x) in Program.cs:line 15
   at System.Linq.Enumerable.<>c__DisplayClass12`3.<CombineSelectors>b__11(TSource x)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.EnumerableEx.Run[TSource](IEnumerable`1 source)
   at Demo.Program.Main(String[] args) in Program.cs:line 13


Only when the Select operator’s iterator hits 0 for its input does its projection function throw a DivideByZeroException, causing the iterator to come to an abrupt stop as seen above. In the connected world, where iterators may reach out to external services that can signal error conditions, the ability to handle such sequences in a better and composable way becomes increasingly important.



In this post, we’ll have a look at the exception handling primitives for enumerable sequences provided by Rx in System.Interactive.EnumerableEx. A related constructor operator, Throw, will be discussed later but is simple enough to reveal in this context because of its relevance:




var oops = EnumerableEx.Throw<int>(new Exception("Oops"));
oops.Run();


The Throw operator simply creates an iterator that throws the specified exception upon the first MoveNext call on its enumerator. It’s the counterpart to the Return operator creating a single-element iterator. Logically, they correspond to the OnError and OnNext methods of IObserver<T> in the reactive world, respectively. In addition, we’ll see the relation between those operators and Notification<T> later on, when covering “Code = Data” discussing the Materialize and Dematerialize operators.



 


Catch it and move on



First up is the Catch operator, which is available with the following signatures:




public static IEnumerable<TSource> Catch<TSource>(IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> Catch<TSource, TException>(this IEnumerable<TSource> source, Func<TException, IEnumerable<TSource>> handler) where TException : Exception;


The second overload is the one used directly for exception handling as you’re used to it in your favorite imperative language. While you normally associate a handler code block with a “protected code block”, here a handler consists of a function producing a sequence in response to an exceptional iteration over the corresponding “protected sequence”. A sample will make things clearer. Consider the following iterator:




static IEnumerable<int> CouldThrow()
{
    yield return 1;
    yield return 2;
    throw new InvalidOperationException("Oops!");
}

Assume you can’t handle the exceptional condition from the inside and you got the iterator from somewhere else, so the following is impossible to achieve:




static IEnumerable<int> CouldThrow()
{
    try
    {
        yield return 1;
        yield return 2;
        throw new InvalidOperationException("Oops!");
    }
    catch (InvalidOperationException)
    {
        yield return 3;
        yield return 4;
        yield return 5;
    }
}

In fact, the above is invalid C# since you can’t yield from a try-block that’s associated with a catch clause, and neither can you yield from a catch clause. Either way, this illustrates basically what we want to achieve from a conceptual point of view, but on the consuming side of the iterator. This is what Catch allows us to do, as follows:




CouldThrow()
.Catch((InvalidOperationException ex) => new[] { 3, 4, 5 })
.Run(Console.WriteLine);

This simply prints the numbers 1 through 5 on the screen, where the last three values originate from the exception handler. Obviously one could inspect the exception object in the handler. Just like with regular block-based exception handling constructs, one can have multiple “nested” catch clauses associated with the same source sequence. This is achieved by simply chaining Catch operator calls:




new [] {
/* yield return */ 1,
/* yield return */ 2 }.Concat(
/* throw */ EnumerableEx.Throw<int>(new InvalidOperationException("Oops!")))
.Catch((InvalidOperationException ex) => new [] {
/* yield return */ 3,
/* yield return */ 4 }.Concat(
/* throw */ EnumerableEx.Throw<int>(new FormatException("Aargh!"))))
.Catch((FormatException ex) => new [] {
/* yield return */ 5 })
.Run(Console.WriteLine);


Here, the first catch clause throws an exception by itself, being caught by the next catch clause. This is completely similar to regular exception handling. In summary, the Catch operator allows iteration of a sequence to continue with another one when an exception occurs during the first’s iteration. The handler function provided to Catch isn’t evaluated till an exception occurs, so if the resulting sequence isn’t iterated far enough for an exception to be triggered, the handler obviously won’t execute.



The first overload of Catch allows specifying a sequence of sequences (IEnumerable<IEnumerable<T>>), continuing a sequence that has terminated by an exception with the sequence following it. For example:




var ex = EnumerableEx.Throw<int>(new Exception());
EnumerableEx.Catch(new[]
{
    new [] { 1, 2 }.Concat(ex),
    new [] { 3, 4 }.Concat(ex),
    new [] { 5 },
    new [] { 6 },
}).Run(Console.WriteLine);


This again will print the numbers 1 through 5, but not 6. The reason is that the first sequence blew up after yielding 1 and 2, causing the next sequence yielding 3 and 4 to be looped in, again causing an exception followed by a hand-over to the third sequence yielding 5. This third sequence finishes regularly (as opposed to exceptionally), so the story ends. I leave it to the reader to write down the corresponding block-structured nested try-catch statements this conceptually corresponds to; a sketch follows below.
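Conceptually, it corresponds to something along these lines (not valid C# around the yields, hence shown in comments only):

// try
// {
//     try
//     {
//         try { yield return 1; yield return 2; throw new Exception(); }
//         catch { yield return 3; yield return 4; throw new Exception(); }
//     }
//     catch { yield return 5; }  // completes regularly...
// }
// catch { yield return 6; }      // ...so this handler never runs.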



Exercise: how would you implement a rethrow operation?
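One possible answer, as a sketch: rethrowing simply means handing the caught exception back to Throw from within a Catch handler.

CouldThrow()
    .Catch((InvalidOperationException ex) =>
    {
        Console.WriteLine("Observed: " + ex.Message);
        return EnumerableEx.Throw<int>(ex); // rethrow; surfaces unless a downstream Catch handles it
    })
    .Run(Console.WriteLine);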






 


Finally, too



Now we’ve seen the Catch operator, Finally should come as no surprise. From the signature alone, we can see what it does:




public static IEnumerable<TSource> Finally<TSource>(this IEnumerable<TSource> source, Action finallyAction);


However enumeration over the source terminates, the finallyAction will be executed. Obviously this can be illustrated using two cases, one for the regular case and one for the exceptional case. For the latter, we use EnumerableEx.Throw again. First, the regular case:




/* try { */ new [] {
/* yield return */ 1,
/* yield return */ 2 }
.Finally(() =>
Console.WriteLine("Finally"))
.Run(Console.WriteLine);


This will print 1 and 2, followed by the Finally message. In case of an exception, let’s show the similarity to the lexical nesting of exception handler blocks in C#:




/* try { */
/* try { */
new[] {
/* yield return */ 1,
/* yield return */ 2 }.Concat(
/* throw */ EnumerableEx.Throw<int>(new Exception()))
.Finally(() =>
Console.WriteLine("Finally"))
.Catch((Exception ex) => new[] {
/* yield return */ 3,
/* yield return */ 4,
/* yield return */ 5 })
.Run(Console.WriteLine);

Here the innermost enumerable yields 1 and 2, followed by the throwing of an exception. The Finally operator ensures the printing action is executed no matter how this sequence terminates. In this case, the exception will be caught downstream by the Catch operator, so the end result on the screen will be 1, 2, Finally, 3, 4, 5. As a simple exercise, think about what the following code will and should print:




/* try { */
/* try { */
new[] {
/* yield return */ 1,
/* yield return */ 2 }.Concat(
/* throw */ EnumerableEx.Throw<int>(new Exception()))
.Finally(() =>
Console.WriteLine("Finally"))
.Catch((Exception ex) => new[] {
/* yield return */ 3,
/* yield return */ 4,
/* yield return */ 5 })
.Take(2)
.Run(Console.WriteLine);


[marble diagram]



(Note: break happens when a consumer stops iterating over the resulting sequence.)



 


OnErrorResumeNext as in VB



Visual Basic fans will recognize this operator without doubt. Its operation is fairly straightforward: given a sequence of sequences, those are enumerated one by one, yielding their result to the caller. This is pretty much the same as the Concat operator we’ll see when talking about combinators, with the main difference being that an exceptional termination of any of the sequences does not bubble up. Instead, the OnErrorResumeNext operator simply moves on to the next sequence it can “yield foreach”. A sample will make this clear, but first the signatures:




public static IEnumerable<TSource> OnErrorResumeNext<TSource>(params IEnumerable<TSource>[] sources);
public static IEnumerable<TSource> OnErrorResumeNext<TSource>(this IEnumerable<IEnumerable<TSource>> sources);
public static IEnumerable<TSource> OnErrorResumeNext<TSource>(this IEnumerable<TSource> source, IEnumerable<TSource> next);


The following sample prints numbers 1 through 9, with no exception surfacing, even though the third sequence did terminate exceptionally. Replacing the OnErrorResumeNext call with the use of the Concat operator would surface that exception, terminating the resulting sequence after 1 through 7 have been yielded:




EnumerableEx.OnErrorResumeNext(
    new [] { 1, 2 },
    new [] { 3, 4, 5 },
    new [] { 6, 7 }.Concat(EnumerableEx.Throw<int>(new Exception())),
    new [] { 8, 9 }
).Run(Console.WriteLine);


This operator can be useful for batch processing of records where an exceptional return is tolerable.








 


Using resources



Just like C#’s and VB’s using statements are related to exceptions due to their “finally”-like guarantees for cleanup, System.Interactive’s Using operator is used for proper resource cleanup, this time in the face of delayed execution of a sequence. The signature for Using is as follows:




public static IEnumerable<TSource> Using<TSource>(Func<IDisposable> resourceSelector, Func<IDisposable, IEnumerable<TSource>> resourceUsage);


The idea is to create a sequence that acquires a resource when its iteration is started (by running resourceSelector), which is subsequently used to provide a data sequence “using the resource” (obtained through resourceUsage). It’s only when the resulting sequence terminates (exceptionally or regularly) that the resource is disposed by calling its Dispose method. To illustrate this, let’s cook up our own Action-based disposable:




class ActionDisposable : IDisposable
{
    private Action _a;

    public ActionDisposable(Action a)
    {
        _a = a;
    }

    public void Dispose()
    {
        _a();
    }
}

Now we can write the following two samples:




EnumerableEx.Using<int>(() => new ActionDisposable(() => Console.WriteLine("Gone")), a =>
{
    // Now we could be using a to get data back...
    Console.WriteLine(a is ActionDisposable);
    // ... but let's just return some stock data.
    return new[] { 1, 2, 3 };
})
.Run(Console.WriteLine);

EnumerableEx.Using<int>(() => new ActionDisposable(() => Console.WriteLine("Gone")), a =>
{
    // Now we could be using a to get data back...
    Console.WriteLine(a is ActionDisposable);
    // ... which may result in an exception.
    return new[] { 1, 2 }.Concat(EnumerableEx.Throw<int>(new Exception()));
})
.Catch((Exception ex) => new [] { 4, 5, 6 })
.Run(Console.WriteLine);

The first one will nicely obtain the Gone-printing resource when enumeration is triggered by Run, returning values 1, 2 and 3, before Using calls dispose on the resource, causing it to print “Gone”. In the second example, the results produced under the acquired resource scope trigger an exception, so upon leaving Using the resource will be disposed again (printing “Gone”), putting us in the Catch operator’s body as we saw before. Now the output will be 1, 2, Gone, 4, 5, 6. Again, as an exercise, think about the following one (easy, just stressing the point…):




EnumerableEx.Using<int>(() => new ActionDisposable(() => Console.WriteLine("Gone")), a =>
{
    // Now we could be using a to get data back...
    Console.WriteLine(a is ActionDisposable);
    // ... but let's just return some stock data.
    return new[] { 1, 2, 3 };
})
.Take(2)
.Run(Console.WriteLine);


[marble diagram]







(Note: break is caused by the consumer’s termination of iteration over the resulting sequence.)



 


Retry till you succeed



A final operator in the exception handling category we’re discussing in this post is Retry. The idea of Retry is to retry enumerating and yielding a sequence till it terminates successfully:




public static IEnumerable<TValue> Retry<TValue>(this IEnumerable<TValue> source);
public static IEnumerable<TValue> Retry<TValue>(this IEnumerable<TValue> source, int retryCount);


Obviously, Retry has no effect if the source sequence iterates without an exception being triggered:




// A no-op.
new [] { 1, 2, 3 }
.Retry()
.Run(Console.WriteLine);

On the other hand, if an exception occurs, a new enumerator over the source sequence is obtained (using GetEnumerator) and iteration is retried. If the exception condition is persistent, this may cause infinite retry:




// Will go forever...
new [] { 1, 2, 3 }.Concat(EnumerableEx.Throw<int>(new Exception()))
.Retry()
.Run(Console.WriteLine);
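Conceptually, the unbounded overload might be implemented along these lines (a sketch, not the actual Rx source):

static IEnumerable<TValue> Retry<TValue>(this IEnumerable<TValue> source)
{
    while (true)
    {
        using (var e = source.GetEnumerator())
        {
            while (true)
            {
                bool hasNext = false;
                var current = default(TValue);
                try
                {
                    hasNext = e.MoveNext();
                    if (hasNext)
                        current = e.Current;
                }
                catch
                {
                    break; // exception: start over with a fresh enumerator
                }
                if (!hasNext)
                    yield break; // regular completion: stop retrying
                yield return current;
            }
        }
    }
}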

The overload taking a retryCount can be used to cap the number of retries. If the exception condition is dependent on dynamic factors (e.g. network connectivity to a stream of data), use of Retry will eventually make the iteration succeed:




static int s_count = 0;

static IEnumerable<int> MayGetNumbers()
{
    try
    {
        yield return 4;
        if (s_count == 0)
            throw new Exception();
        yield return 5;
        if (s_count == 1)
            throw new Exception();
        yield return 6;
    }
    finally
    {
        s_count++;
    }
}


The iterator above will make a bit more progress every time it’s called, the first time getting stuck after yielding 4, the second time after yielding 4 and 5, and finally succeed to yield 4, 5 and 6. Using Retry on this one will produce the following result:




// 4, (!), 4, 5, (!), 4, 5, 6
MayGetNumbers()
.Retry()
.Run(Console.WriteLine);


I’ll leave it as an exercise to the reader to come up with a diagram for this operator, introducing a distinction between IEnumerable and IEnumerator, the latter being potentially different for every time the GetEnumerator method is called. It’s because of those potentially different enumeration results that Retry has a chance to be effective.



 


Next on More LINQ



Constructor operators, producing (sometimes trivial) sequences.

Saturday, December 26, 2009  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you’ll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A great amount of resources is available on Channel 9. In this series, I’ll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today’s installment we’ll talk about the imperative style operators provided on EnumerableEx:

[figure: operators covered in this installment]

 

Laziness and side-effecting iterators

LINQ can be quite deceptive on a first encounter due to the lazy island it provides in otherwise eagerly evaluated languages like C# and Visual Basic. Simply writing down a query doesn’t cause it to be executed, assuming no eager operators like ToArray, ToList or ToDictionary are used. In fact, the composition of sequences lies at the heart of this since sequences can evaluate lazily, on demand when calling MoveNext on an enumerator. Iterators are a simple means to provide such a sequence, potentially capturing a sea of side-effects interleaved with the act of producing (or “yielding”) values.

Let’s start with a quite subtle kind of side-effect, reading from a random number generator:

static Random s_random = new Random();

static IEnumerable<int> GetRandomNumbers(int maxValue)
{
    while (true)
    {
        yield return s_random.Next(maxValue);
    }
}


Every time you execute this, you’ll get to see different numbers. What’s more important in this context though is the fact every yield return point in the code is a place where the iterator suspends till the next call to MoveNext occurs, causing it to run till the next yield return is encountered. In other words, the whole loop lies dormant till a consumer comes along. To visualize this a bit more, let’s add some Console.WriteLine output calls as an additional side-effect:




static Random s_random = new Random();

static IEnumerable<int> GetRandomNumbers(int maxValue)
{
    while (true)
    {
        Console.WriteLine("Next");
        yield return s_random.Next(maxValue);
    }
}

The following code fragment illustrates the point in time where the sequence executes:




var res = GetRandomNumbers(100).Take(10);
Console.WriteLine("Before iteration");
foreach (var x in res)
Console.WriteLine(x);


The result is the following:




Before iteration
Next
16
Next
56
Next
46
Next
58
Next
22
Next
91
Next
77
Next
20
Next
91
Next
92


 


Run, run, run



System.Interactive’s Run operator in EnumerableEx allows execution of the sequence on the spot, in a fashion equivalent to having a foreach-loop. Two overloads exist, one discarding the element consumed from the sequence and another one feeding it in to an Action<T>:




public static void Run<TSource>(this IEnumerable<TSource> source);
public static void Run<TSource>(this IEnumerable<TSource> source, Action<TSource> action);
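A minimal sketch of what the second overload boils down to (modulo the error checking the real thing presumably has):

static void Run<TSource>(this IEnumerable<TSource> source, Action<TSource> action)
{
    // Eagerly drain the sequence, triggering all of its effects.
    foreach (var item in source)
        action(item);
}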


Rewriting the code above using the second overload will produce similar results:




var res = GetRandomNumbers(100).Take(10);
Console.WriteLine("Before iteration");
res.Run(x => Console.WriteLine(x)); // equivalent to res.Run(Console.WriteLine);


Since Run returns void, it’s only used for its side-effects, which can be useful from time to time. Previously, a similar effect could be achieved by calling ToArray or ToList, at the cost of burning memory for no good reason. In the above, those wouldn’t even be viable options in case you simply want to print random numbers ad infinitum, as an infinite sequence would cause the system to run out of memory in a ToArray or ToList context.



Let’s assume for the continuation of this post that GetRandomNumbers doesn’t exhibit a printing side-effect in and of itself:




static IEnumerable<int> GetRandomNumbers(int maxValue)
{
while (true)
{
yield return s_random.Next(maxValue);
}
}

In this setting, our Run call above effectively adds the side-effect of printing to the screen “from the outside”, at the (consuming) end of the “query”. Using the Do operator, one can inject a side-effect in a lazily evaluated sequence composed of different combinators.






 


Adding side-effects using Do



The Do method has the following signature:




public static IEnumerable<TSource> Do<TSource>(this IEnumerable<TSource> source, Action<TSource> action);


Taking in an IEnumerable<T> and producing one, it simply iterates over the source, executing the specified action before yielding the result to the consumer. Other than producing the side-effect during iteration, it doesn’t touch the sequence at all. You can write this operator in a straightforward manner yourself:




static IEnumerable<T> Do<T>(this IEnumerable<T> source, Action<T> action)
{
    foreach (var item in source)
    {
        action(item);
        yield return item;
    }
}

Or you could build it out of other combinator primitives, in particular Select:




static IEnumerable<T> Do<T>(this IEnumerable<T> source, Action<T> action)
{
    return source.Select(item =>
    {
        action(item);
        return item;
    });
}

This is useful primarily for debugging purposes, where you want to “probe” different points of execution in a query. For example, consider the following query expression:




var res = from x in GetRandomNumbers(100).Take(10)
          where x % 2 == 0
          orderby x
          select x + 1;
res.Run(x => Console.WriteLine(x));


Don’t know why it produces the results you’re seeing? Using Do, you can inject “checkpoints”. First, realize the above query desugars into:




var res = GetRandomNumbers(100).Take(10)
    .Where(x => x % 2 == 0)
    .OrderBy(x => x)
    .Select(x => x + 1);

Now we can put Do calls “on the dots” to see the values flowing through the pipeline during consumption of the query result.




var res = GetRandomNumbers(100).Take(10)
    .Do(x => Console.WriteLine("Source -> {0}", x))
    .Where(x => x % 2 == 0)
    .Do(x => Console.WriteLine("Where -> {0}", x))
    .OrderBy(x => x)
    .Do(x => Console.WriteLine("OrderBy -> {0}", x))
    .Select(x => x + 1)
    .Do(x => Console.WriteLine("Select -> {0}", x));


The below shows what’s triggered by the call to Run:




Source  -> 96
Where   -> 96
Source  -> 25
Source  -> 8
Where   -> 8
Source  -> 79
Source  -> 25
Source  -> 3
Source  -> 36
Where   -> 36
Source  -> 51
Source  -> 53
Source  -> 81
OrderBy -> 8
Select  -> 9
9
OrderBy -> 36
Select  -> 37
37
OrderBy -> 96
Select  -> 97
97


For example, 25 produced by the source didn’t survive the Where operator filtering. From the output one can also see that all Where and Source consumption calls precede any OrderBy calls, since the ordering operator eagerly drains its source before carrying out the ordering and passing the results to its consumer.



Looking at the output before the first result, 9, is printed, you can observe the effect of the first MoveNext call on the resulting sequence: the whole source is consulted and fed through the Where operator in order for OrderBy to produce the first (smallest) result. A conceptual diagram illustrating the interception of sequences using Do is shown below:




[figure: probing a query pipeline with Do]


In fact, one can make Do surface through query syntax as well, by providing an extension method overload for e.g. Where (note: this is purely for illustration purposes, and admittedly over-overloading and misusing existing operators :-)):




public static class DoEnumerable
{
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Action<T> action)
    {
        return source.Do(action);
    }
}


The resulting usage pattern is the following:




var res = from x in GetRandomNumbers(100).Take(10)
          /*do*/ where Console.WriteLine("Source -> {0}", x)
          where x % 2 == 0
          /*do*/ where Console.WriteLine("Where -> {0}", x)
          orderby x
          /*do*/ where Console.WriteLine("OrderBy -> {0}", x)
          select x + 1 into x
          /*do*/ where Console.WriteLine("Select -> {0}", x)
          select x;





 


A lame semi-cooperative scheduler



Let’s first say there’s no good justification (this is the lame part) for doing this sample other than for educational purposes, showing use of a sequence purely for its side-effects. The idea of the below is to declare a worker thread with varying priorities for portions of its code. Sure, we could have set thread priorities directly in the code, but the special part of it is feeding back desired priorities to the driver loop (“Start”) of the scheduler, which can decide how to implement this prioritization scheme. The cooperative nature lies in the fact that the worker threads yield their run by signaling a new priority, effectively handing over control to the driver loop. I’m calling it “semi” just because the following sample implementation relies on preemptive scheduling as provided by the Thread class; the reader challenge will be to shake off that part.



First of all, work is declared by an iterator that yields priorities followed by the work that will run under that priority. The driver can decide whether or not to call MoveNext, effectively causing the iterator to proceed till the next yield return statement. For example:




static IEnumerable<ThreadPriority> Work1()
{
    int i = 0;
    Action print = () =>
    {
        Console.WriteLine("{0} @ {1} -> {2}", Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.Priority, i++);
        for (int j = 0; j < 10000000; j++)
            ;
    };
    yield return ThreadPriority.Normal;
    {
        print();
    }
    yield return ThreadPriority.Lowest;
    {
        print();
    }
    yield return ThreadPriority.Normal;
    {
        print();
    }
    yield return ThreadPriority.Highest;
    {
        print();
    }
    yield return ThreadPriority.Highest;
    {
        print();
    }
}

The block-based work item declaration after a yield syntactically groups work items and their priorities. Obviously we fake work to illustrate the point. A driver loop, called Start, can be implemented as lamely as relying on the managed Thread type:




static void Start(IEnumerable<ThreadPriority> work)
{
    new Thread(() =>
    {
        work.Do(p => Thread.CurrentThread.Priority = p).Run();
    }).Start();
}


In here, we’re using both Run and Do to respectively run the work and cause the side-effect of adjusting the priority of the thread hosting the work. The reader is invited to cook their own dispatcher with the following signature:




static void Start(params IEnumerable<ThreadPriority>[] workers);


The idea of this one will be to implement a prioritization scheme – just for fun and definitely no profit other than intellectual stimulus – by hand: run all the work on the same thread, with MoveNext calls standing for an uninterruptible quantum. During a MoveNext call, the worker will proceed till the next yield return is encountered, so you may cause an unfair worker to run away and do work forever. This pinpoints the very nature of cooperative scheduling: you need trust in the individual workers. But when you regain control, retrieving the priority for the next work item the worker plans to do, you can make a decision whether you let it go for another quantum (by calling MoveNext) or let another worker from the worker list take a turn (tip: use an ordering operator to select the next worker to get a chance to run). This process continues till all workers have no more work items left, indicated by MoveNext returning false (tip: keep a list of “schedulable” items).



In the scope of this post, the sole reason I showed this sample is because of the use of Do and Run to drive home the point of those operators. Sure, you can achieve the same result (if desired at all) by tweaking the managed thread priority directly in each worker.



 


Next on More LINQ



Dealing with exceptions caused by sequence iteration.

Friday, December 25, 2009  |  From B# .NET Blog

With the recent release of the Reactive Extensions for .NET (Rx) on DevLabs, you'll hear quite a bit about reactive programming, based on the IObservable<T> and IObserver<T> interfaces. A wealth of resources is available on Channel 9. In this series, I'll focus on the dual of the System.Reactive assembly, which is System.Interactive, providing a bunch of extensions to the LINQ Standard Query Operators for IEnumerable<T>. In today's installment we'll talk about getting started with System.Interactive, also touching briefly on the deep duality.

 

Where to get it?

To get the Reactive Extensions, which include System.Interactive, visit the landing page on DevLabs over here. Downloads are available for .NET Framework 3.5 SP1, .NET Framework 4.0 Beta 2 and Silverlight 3. In this series, I’ll be using the “desktop CLR” distributions from Visual Studio 2008 and Visual Studio 2010.

The differences between the various distributions are of a technical nature and have to do with backporting, to the .NET Framework 3.5 SP1 stack, certain essentials Rx relies on. For instance, the IObservable<T> and IObserver<T> interfaces exist in .NET 4.0 but don't in .NET 3.5. Similarly, the Task Parallel Library (TPL) is available in .NET 4.0's System.Threading namespace, while Rx redistributes it to run on .NET 3.5 SP1.

 

What’s in it?

Once you've installed it, have a look at your Program Files (x86) folder, under Microsoft Reactive Extensions. I'm using the "DesktopV2" version here, which refers to CLR 2.0 and the .NET Framework 3.5 SP1 package. The main difference with the "DesktopV4" version is the presence of System.Threading, which contains the Parallel Extensions that ship in .NET 4.0:

[Screenshot: the Microsoft Reactive Extensions installation folder]

A brief introduction to the remaining assemblies:

  • System.CoreEx.dll contains some commonly used types like Action and Func delegates with bigger arities (up to 16 parameters), new Property<T> primitives, a Unit type, an Event type wrapping “object sender, EventArgs e” pairs, a Notification<T> (which will be discussed extensively) and some notions of time in the form of TimeInterval<T> and Timestamped<T>.
  • System.Interactive.dll, the subject of this new series, contains extension methods for IEnumerable<T> and additional LINQ to Objects operators, provided in a type called EnumerableEx.
  • System.Reactive.dll, which is where Rx gets its name from and which will be discussed in future series, is the home for reactive programming tools. It contains IObservable<T> and IObserver<T>, as well as various combinators over them (sometimes referred to as “LINQ to Events”). In addition, it provides primitives like subjects and contains a join library (more about this in a separate installment).

 

Duality? Help!

As we like to use expensive words like "mathematical dual", it makes sense to provide an easy-to-grasp introduction to the subject. The first thing to look at is the distinction between interactive and reactive programming, illustrated in the diagram below:

[Diagram: interactive (pull-based) versus reactive (push-based) programming]

In the world of interactive programming, the application asks for more information. It pulls data out of a sequence that represents some data source, in particular by calling MoveNext on an enumerator object. The application is quite active in the data retrieval process: besides getting an enumerator (by calling GetEnumerator on an enumerable), it also decides about the pace of the retrieval by calling MoveNext at its own convenience.

In the world of reactive programming, the application is told about more information. Data is pushed to it from a data source by getting called on the OnNext method of an observer object. The application is quite passive in the data retrieval process: apart from subscribing to an observable source, it can’t do anything but reacting to the data pushed to it by means of OnNext calls.

The nice thing about those two worlds is that they’re dual. The highlighted words in the paragraphs above have dual meanings. Because of this observation, it’s desirable to search for dualities on a more formal and technical level as well. In particular, the interfaces being used here are the exact duals of one another: IEnumerable<T> is to IObservable<T> as IEnumerator<T> is to IObserver<T>. Dualization can be achieved by turning inputs (e.g. method parameters) into output (e.g. return values):

[Diagram: dualizing IEnumerable<T> and IEnumerator<T> into IObservable<T> and IObserver<T>]
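Spelled out in code, the two interface pairs look as follows; shown here for reference, with comments mapping the dual notions onto one another (the shapes match the .NET 4.0 and Rx definitions, minus the variance annotations):

interface IEnumerable<T>
{
    IEnumerator<T> GetEnumerator();               // output: an enumerator
}

interface IEnumerator<T> : IDisposable
{
    bool MoveNext();                              // we ask whether there's a next element
    T Current { get; }                            // and pull it out
}

interface IObservable<T>
{
    IDisposable Subscribe(IObserver<T> observer); // input: an observer (GetEnumerator, dualized)
}

interface IObserver<T>
{
    void OnNext(T value);                         // the element is pushed in
    void OnError(Exception error);                // the dual of MoveNext throwing
    void OnCompleted();                           // the dual of MoveNext returning false
}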

Lots of dualities exist in various disciplines, providing for great knowledge transfers between different domains. For example, in formal logic, De Morgan’s law allows converting expressions built from conjunctions into ones built from disjunctions, and vice versa. In electronics, similarities exist between the behavior of capacitors and inductances: know one and how to go back and forth between domains, and you know the other. Fourier calculus provides duals between time and frequency domains.

One thing all those dualities have in common is a way to go back and forth between domains. Such a mechanism exists in the worlds of System.Reactive and System.Interactive as well. Every observable collection can be turned into an enumerable one and vice versa, using operators called ToEnumerable and ToObservable. To get a feel for how those work, imagine an enumerable collection first. The only thing one can do to retrieve its data is enumerate over it; for every value received, signal it on the resulting observable's observer. In the opposite direction, you subscribe to an observable collection to receive the values thrown at you and keep them around so that the resulting enumerable can hand them out.
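In code, the round trip looks roughly like this; a small sketch of mine, assuming the ToObservable and ToEnumerable operators just mentioned:

IEnumerable<int> xs = Enumerable.Range(0, 10);

IObservable<int> ys = xs.ToObservable();   // pulls from xs, pushes to subscribers
IEnumerable<int> zs = ys.ToEnumerable();   // buffers pushed values, hands them out on MoveNext

foreach (var x in zs)
    Console.WriteLine(x);                  // back in pull-land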

In this series, we’ll not look over the garden wall to the reactive world just yet. Instead, we’ll get our hands dirty in the world of System.Interactive, a logical extension to .NET 3.5’s IEnumerable<T> extension methods, known as the Standard Query Operators.

 

Operators overview

The System.Linq.EnumerableEx static class in System.Interactive contains various (extension) methods that operate on IEnumerable<T> collections. It should be seen as a logical extension to the System.Linq.Enumerable class in System.Core. In the illustration below I've summarized the various categories those new operators fall into. Some could be considered to fall in multiple categories, so take this with a grain of salt. Nevertheless, we'll look at those big buckets in subsequent posts in this series:

  • Imperative use – provides operators that execute a sequence (Run) and inject side-effecting Actions in a chain of query operator calls (Do), which is handy for debugging.
  • Exceptions – enumeration of sequences can cause exceptions (e.g. if you write an iterator, but also by other means – see later), which may need to be handled somehow. Methods like Catch, Finally, Using, OnErrorResumeNext and Retry provide means to make a sequence resilient in face of exceptions.
  • Constructors – instead of creating an iterator yourself, it’s possible to let the system create a sequence on your behalf, e.g. by providing it a generator function (Generate), by composing sequences and elements (Return, StartWith, Throw), or by triggering the call of a deferred constructor function when a client starts enumerating (Defer).
  • Code = Data – the triplet of OnNext, OnError and OnCompleted seen on IObserver<T> is a very code-centric way of signaling the various outcomes of data consumption. An alternative view is to treat those outcomes as pieces of data, called notifications (Notification<T>). Using Materialize and Dematerialize, one can transfer back and forth between those two domains (a short example follows below).
  • Combinators – producing sequences out of one or more existing sequences is what combinators generally do. One can repeat a sequence a number of times (Repeat), zip two sequences together (Zip), let two sequences battle to provide a result the fastest (Amb), and more. Those operators are most “in line” with what you already know from System.Linq today.
  • Functional – while the imperative and exception categories acknowledge the possibility for a sequence to exhibit side-effects, the functional category is meant to tame those side-effects, typically in one-producer-many-consumers scenarios. When a sequence may produce side-effects during iteration, it may be desirable to avoid duplicating those when multiple consumers iterate.
  • Miscellaneous – just that, miscellaneous.

[Diagram: the EnumerableEx operator categories]
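As a quick taste of the "Code = Data" bucket promised above, here's Materialize at work; a small sketch of mine, assuming the System.Interactive operators named in the list:

var xs = new[] { 1, 2, 3 };

// Materialize reifies data and termination into Notification<int> values.
foreach (var n in xs.Materialize())
    Console.WriteLine(n);   // prints, roughly: OnNext(1) OnNext(2) OnNext(3) OnCompleted()

// Dematerialize goes the other way, turning notifications back into regular
// enumeration behavior (rethrowing errors, ending on OnCompleted).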

Next time, we’ll start by looking at the “Imperative use” category. Download the libraries today and start exploring!

Sunday, December 06, 2009  |  From B# .NET Blog

The CLR’s exception handling facilities provide for protected blocks (“try”) one can associate a handler with. There are four kinds of handlers, and exactly one can be associated with a protected block (but nesting can be used to associate multiple handlers with a block of code):

  • A finally handler is executed whenever the block is exited, regardless of whether this happened by normal control flow or an unhandled exception. C# exposes this using the finally keyword.
  • A type-filtered handler handles an exception of a specified class or any of its subclasses. Better known as a “catch block”, C# provides this through its catch keyword.
  • A user-filtered handler runs user-specified code to determine whether the exception should be ignored, handled by the associated handler, or passed on to the next protected block. C# doesn’t expose this, but Visual Basic does by means of its When keyword.
  • A fault handler is executed if an exception occurs, but not on completion of normal control flow. Neither C# nor Visual Basic provide a fault handler language feature.

In this reader challenge, we’re going to focus on fault handlers. Due to their lack of language surface, their effect is often mimicked by using some local state to determine whether the protected block exited gracefully or not:

bool success = false;
try
{
    // Do stuff
    success = true;
}
finally
{
   if (!success)
   {
       // There was a fault. Do something special.
   }
   // Fault or not; this is what finally does.
}

If an exception happens during “Do stuff”, we end up in the finally block and conclude that success was never set to true. This indicates an error happened, and we should handle the fault case. However, this technique gets tricky when there are different paths exiting the try block: one could return from the enclosing method in various places, requiring the “success = true” code to be sprinkled around. This is exactly what exception handling was designed to prevent: clutter in your code due to error condition/code tracking. So, we’re defeating that purpose.
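To see the clutter, imagine multiple exit paths; every one of them needs its own success assignment. A sketch of mine, with hypothetical TryFast and DoSlow helpers:

bool success = false;
try
{
    if (TryFast())
    {
        success = true; // exit path 1
        return;
    }
    DoSlow();
    success = true;     // exit path 2
    return;
}
finally
{
    if (!success)
    {
        // The fault path: reached no matter which exit was missed.
    }
}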

Today’s challenge is to create a true fault handler in C#, just for the sake of it. This is merely a brain teaser, encouraging readers to find out what happens behind the scenes of compiled C# code. We won’t be addressing certain concerns like non-local return (the case I mentioned above) but will be hunting for the true “fault” handler treasure hidden deeply in the C# compiler’s IL code emitter. The operational specification is the following:

var f = Fault(() => Console.WriteLine("Okay"),
              () => Console.WriteLine("Fault"));
f();
Console.WriteLine();

var g = Fault(() => { throw new Exception("Oops"); },
              () => Console.WriteLine("Fault"));
try
{
    g();
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}

The above should produce the following output:

Okay

Fault
System.Exception: Oops
   at Program.<Main>b__2()
   (I won’t reveal the secrets here yet…)
   at Program.Main()

Action f illustrates the non-exceptional case where the fault handler is not invoked (a finally handler would get invoked). Action g illustrates the exceptional case where the fault handler gets invoked and the exception bubbles up to the catch-block surrounding its invocation.

It’s strictly forbidden to use local state in Fault (or a method it calls) to track the successful execution of the protected block. Therefore, the below is an invalid solution:

static Action Fault(Action protectedBlock, Action faultHandler)
{
    return () =>
    {
        bool success = false;
        try
        {
            protectedBlock();
            success = true;
        }
        finally
        {
            if (!success)
                faultHandler();
        }
    };
}

Moreover, execution of your Fault method should really use a fault handler as encountered in IL code. It should be a fault handler, not mimic one. In addition, you should not go for a solution where you write a Fault method in ILASM by hand and link it as a netmodule in a C# project, using al.exe:

.class private FaultClosure
{
  .field class [System.Core]System.Action protectedBlock
  .field class [System.Core]System.Action faultHandler

  .method void .ctor()
  {
    ldarg.0
    call instance void [mscorlib]System.Object::.ctor()
    ret
  }

  .method void Do()
  {
    .try
    {
      ldarg.0
      ldfld class [System.Core]System.Action Program/FaultClosure::protectedBlock
      callvirt instance void [System.Core]System.Action::Invoke()
      leave.s END
    }
    fault
    {
      ldarg.0
      ldfld class [System.Core]System.Action Program/FaultClosure::faultHandler
      callvirt instance void [System.Core]System.Action::Invoke()
      endfault
    }
    END: ret
  }
}

.method static class [System.Core]System.Action Fault(class [System.Core]System.Action protectedBlock, class [System.Core]System.Action faultHandler)
{
  .locals init (class Program/FaultClosure V_0)
  newobj void Program/FaultClosure::.ctor()
  stloc.0
  ldloc.0
  ldarg.0
  stfld class [System.Core]System.Action Program/FaultClosure::protectedBlock
  ldloc.0
  ldarg.1
  stfld class [System.Core]System.Action Program/FaultClosure::faultHandler
  ldloc.0
  ldftn instance void Program/FaultClosure::Do()
  newobj void [System.Core]System.Action::.ctor(object, native int)
  ret
}

Again, this exercise is just for fun with no profit other than brain stimulation. Hint: what C# 2.0 or later feature may cause a “fault” block to be emitted (i.e. if you ildasm a compiled valid C# application, you can find a “fault” keyword)?

Happy holidays!

Sunday, November 08, 2009  |  From B# .NET Blog

Introduction

Recursion is a widely known technique to decompose a problem in smaller “instances” of the same problem. For example, performing tree operations (e.g. in the context of data structures, user interfaces, hierarchical stores, XML, etc) can be expressed in terms of a navigation strategy over the tree where one performs the same operation to subtrees. A base case takes care of the algorithm’s “bounding”; in case of tree operations that role is played by the leaf nodes of the tree.

Looking at mathematical definitions of the factorial function, one often finds both imperative-style and recursive definitions:

Imperative

n! = 1 × 2 × … × n

Recursive

0! = 1
n! = n × (n - 1)!   (for n > 0)

In here, the first definition lends itself nicely to implementation in an imperative language like C#, e.g. using a foreach-loop. Or, in a more declarative and functionally inspired style, one could write it using LINQ’s Aggregate operator (which really is a catamorphism):

Func<int, int> fac = n => Enumerable.Range(1, n).Aggregate(1, (p, i) => p * i);

It’s left as an exercise to the reader to define the other catamorphic operators in LINQ in terms of the Aggregate operator; a couple of samples are sketched below.
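For instance, Sum and Count over integers can be phrased in terms of Aggregate; a quick sketch of mine, not from the original post:

// Sum as a catamorphism: fold addition over the sequence, seeded with 0.
static int Sum(IEnumerable<int> source)
{
    return source.Aggregate(0, (acc, x) => acc + x);
}

// Count likewise: ignore the element and increment the accumulator.
static int Count<T>(IEnumerable<T> source)
{
    return source.Aggregate(0, (acc, _) => acc + 1);
}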

But this is not what we’re here for today. Instead, we’re going to focus on the recursive definition. We all know how to write this down in C#, as follows:

int fac(int n)
{
    return n == 0 ? 1 : n * fac(n - 1);
}

Or, we could go for lambda expressions, requiring a little additional trick to make this work:

Func<int, int> fac = null;
fac = n => n == 0 ? 1 : n * fac(n - 1);

The intermediate assignment with the null literal is required to satisfy the definite assignment rules at the point where fac is used in the body of the lambda expression. What goes on here is quite interesting. When the compiler sees that the fac local variable is used inside a lambda’s body, it’s hoisted into a closure object. In other words, the local variable is not that local:

var closure = new { fac = (Func<int, int>)null };
closure.fac = n => n == 0 ? 1 : n * closure.fac(n - 1);

Because of the heap-allocated nature of a closure, we can pass it around – including all of its “context” – to other locations in the code, potentially lower on the call stack. Let’s not go there, but focus on the little null-assignment trick we had to play here. Turns out we can eliminate this.
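Strictly speaking, anonymous type properties are read-only, so the fragment above is conceptual shorthand. What the compiler really emits is closer to a display class with a mutable field, along these lines (the names are illustrative; the real ones are compiler-generated):

class DisplayClass
{
    public Func<int, int> fac;   // the hoisted "local" lives on the heap
}

// The two statements from before then compile into roughly:
var closure = new DisplayClass();
closure.fac = n => n == 0 ? 1 : n * closure.fac(n - 1);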

 

Please tell me … Y

Our two-step recursive definition of a lambda expression isn’t too bad, but it should stimulate the reader’s curiosity: can’t we do a one-liner recursive definition instead? The following doesn’t work for reasons alluded to above (try it yourself in your C# compiler):

Func<int, int> fac = n => n == 0 ? 1 : n * fac(n - 1);

In languages like F#, a separate recursive definition variant of “let” exists:

let rec fac n = if n = 0 then 1 else n * fac (n - 1)

An interesting (well, in my opinion at least) question is whether we can do something similar in C#, realizing “anonymous recursion”. What’s anonymous about it? Well, just having a single expression, without any variable assignments, that captures the intent of the recursive function. In other words, I’d like to be able to:

ThisMethodExpectsUnaryFunctionIntToInt(/* I want to pass the factorial function here, defining it inline */)

To do this, in the factorial-function-defining expression we can’t refer to a local variable, as we did in the C# (and the F#) fragment. Yet, we need to be able to refer to the function being defined to realize the recursion. If it ain’t a local variable, and we need to refer to it, it ought to be a parameter to the lambda expression:

fac => n => n == 0 ? 1 : n * fac(n - 1)

Now we can start to think about types here. On the outer level, we have a function that takes in a “fac” and produces a function “n => …”. The latter function, at the inner level, is a function that takes in “n” and produces “n == 0 ? …”. That last part is simple to type: Func<int, int>. Back to the outer layer of the lambda onion, what has to be the type of fac? Well, we’re using fac inside the lambda expression, giving it an int and expecting an int back (see “fac(n – 1)”), hence it needs to be a Func<int, int> as well. Pasting all pieces together, the type of the thing above is:

Func<Func<int, int>, Func<int, int>>

Or, in a full-typed form, the expression looks as follows:

Func<Func<int, int>, Func<int, int>> f = (Func<int, int> fac) => ((int n) => n == 0 ? 1 : n * fac(n - 1));

But the “ThisMethodExpectsUnaryFunctionIntToInt” method expects, well, a Func<int, int>. We somehow need to shake off one of the seemingly redundant Func<int, int> parts of the resulting lambda expression. In fact, we need to fix the lambda expression by eliminating the fac parameter, substituting the recursive function itself in its place. So far, all we can do is misuse the function above:

f(n => n * 2)(5) -> 40

The n => n * 2 part somehow needs to become the factorial function itself. This can be realized by means of a fixpoint combinator. From a typing perspective, it has the following meaning:

Func<int, int> Fix(Func<Func<int, int>, Func<int, int>> f)

In other words, we should be able to:

ThisMethodExpectsUnaryFunctionIntToInt(Fix(fac => n => n == 0 ? 1 : n * fac(n - 1)))

and leave the magic of realizing the recursion to Fix. This method can be defined as follows (warning: danger of brain explosion):

Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
{
    FuncRec<T, R> fRec = r => t => f(r(r))(t);
    return fRec(fRec);
}

delegate Func<T, R> FuncRec<T, R>(FuncRec<T, R> f);

To see how the Fix method works, step through it, feeding it our factorial definition. The mechanics are less interesting in the context of this blog post; suffice it to say it can be done. By the way, this Fix method is inspired by the Y combinator, a fixpoint combinator created by the high priests of the lambda calculus.

 

Oops … there goes my stack :-(

So far, so good. We have a generic fixpoint method called “Fix”, allowing us to define the factorial function (amongst others of course) as follows:

Fix(fac => n => n == 0 ? 1 : n * fac(n - 1))

Since factorial grows incredibly fast, our Int32-based calculation will overflow in no time, so feel free to use .NET 4’s new BigInteger instead:

var factorial = Fix<BigInteger, BigInteger>(fac => n => n == 0 ? 1 : n * fac(n - 1));
factorial(10000);

Either way, let’s see what happens under the debugger:




[Screenshot: the debugger showing a StackOverflowException during the recursive computation]


That doesn’t look too good, does it? All the magic Fix did was realize the recursion; we’re still using recursive calls to compute the value, and after some 5,000 of them the call stack blew up. Clearly we need to do something to avoid this, whilst staying in the comfort zone of recursive algorithm implementations. One such technique is a trampoline. But before we go there, it’s worthwhile to talk about tail calls.



 


Don’t stand on my tail!



One of the inherent problems with this kind of recursion is that we still need the result of a recursive call after it returns. That seems obvious, but think about it for a while. When we’re computing the factorial of 5, we really are doing this:




fac 5 =
    fac 4 =
        fac 3 =
            fac 2 =
                fac 1 =
                1
            2 = 2 * 1
        6 = 3 * 2
    24 = 4 * 6
120 = 5 * 24


What happens here is that after we return from the recursive call, we still have to carry out a multiplication. From this observation it follows that we need a call stack frame to keep track of the computation going on. One way to improve on the situation is to avoid the need for computation after a recursive call returns. This can be achieved by accumulating the result of recursive calls, effectively carrying the result “forward” till the point we hit the base case. In essence, we’re dragging along the partially computed result on every recursive call. In the case of factorial this accumulator will contain the partial multiplication, starting with a value of 1:




fac 5 1 =
    fac 4 (5 * 1) =
        fac 3 (4 * 5) =
            fac 2 (3 * 20) =
                fac 1 (2 * 60) =
                120


In here, the second parameter is the accumulated product so far. In the base case, we simply return the accumulated value. Now we don’t need to do any more work after the recursive call returns; in other words, we’ve eliminated a “tail” of computation following the recursive call. Compilers can come to this insight and eliminate the recursive call altogether. Below is a sample of an accumulating factorial definition in F#:




[Screenshot: accumulating factorial definition in F#]


If we compile this code with the F# compiler (instead of just playing around in F# interactive) and disassemble it, we get to see exactly this optimization carried out by the compiler:




[Screenshot: disassembly showing the tail-recursive call turned into a loop]


In fact, this code is equivalent to the following piece of C#:




int Fac(int n, int a)
{
    while (n > 1)
    {
        a *= n;
        n--;
    }
    return a;
}


Wonderful, isn’t it? While we preserved a recursive definition, we really got the performance of an imperative loop-construct and are not exhausting the call stack in any way. The C# compiler on the other hand wouldn’t figure this out. In what follows, we will be using this definition of factorial in combination with a trampoline to realize the same kind of stack-friendly recursion in C#.
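For completeness, here's the accumulating definition written recursively in C#; a sketch of mine showing the shape the C# compiler will not turn into a loop (every call still burns a stack frame):

static int Fac(int n, int a)
{
    // Tail position: no work remains after the recursive call returns,
    // yet the C# compiler emits a real call instead of reusing the frame.
    return n <= 1 ? a : Fac(n - 1, a * n);
}

// Fac(5, 1) == 120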



 


The art of jumping the trampoline



One main characteristic of trampolines is that they bounce back: jump on them and you’ll be catapulted into the air thanks to a kinetic energy boost. While in the air you can make funny transformations (corresponding to the body of the recursive function, as we shall see), but in the end you’ll land on the trampoline again. The whole cycle repeats till you run out of energy and come to rest on the trampoline. That state will correspond to the end of the recursion.




[Illustration: bouncing on a trampoline]


This all may sound very vague, but things will become clear in a moment. The core idea of a trampoline is to throw a (recursive) function call on the trampoline, let it compute, and have it land on the trampoline with a new function. It’s important to see that both the function and its arguments are jumping on there. Compare it to an acrobat who jumps on the trampoline and counts down every time he bounces: the function is the acrobat, the argument is the counter he maintains. When that counter reaches the base case, he breaks from the bouncing by carefully landing next to the trampoline.



How can we realize such a thing in C# for functions of various arities? To grasp the concept, it helps to start from the simplest case, i.e. an Action with no arguments and (obviously, as we’re talking about actions) no return value. We want to be able to write something like this, but without exhausting the call stack:




void Motivate()
{
    Console.WriteLine("Go!");
    Motivate();
}


It goes without saying this can be achieved using a simple loop construct, but it’s no surprise the base case of our investigation is trivial. Keep in mind most of my blog posts are about esoteric programming, so don’t ask “Why?” just yet. To realize this recursion, we should start from the signature of a recursive action delegate. To get the trampoline behavior, a recursive action should not just return “void” but return another instance of itself, signaling to the trampoline what to call next. Compare it with the acrobat again: his capability (a “function” that can be called by the ring master initially: “start jumping!”) to jump up returns the capability (a function again, to be called by the trampoline upon landing) to jump another time. This leads to the following (type-recursive!) signature:




delegate ActionRec ActionRec();

To write an anonymous recursive function, we use the same fixpoint technique we saw before. In other words, the action is going to be passed as a parameter to a lambda expression, so that it can be called (to realize the recursion) inside the lambda expression’s body. For example, our Motivate sample can be written like this:




Func<ActionRec, Func<ActionRec>> _motivate = motivate => () =>
{
    Console.WriteLine("Go!");
    return motivate();
};


Read the lambda expression from left to right to grasp it: given an ActionRec (which will be fixed to the action itself further on by means of a Fix call), we’re tasked with providing something the trampoline can call (with no arguments in our simple case) to run the next “bounce”. This by itself returns an ActionRec to facilitate further recursion. Apart from the return keyword and some lambda arrow tokens, this looks quite similar to the typical recursive C# method shown earlier. To get the real recursive function with a regular Action signature, we call a fixpoint method called Fix:




Action Motivate = _motivate.Fix();

Now we can call Motivate and should see no StackOverflowException, even though the function runs forever. The obvious question is how the Fix method works. Since we have no control over the Func delegate type used to define the non-fixed _motivate delegate, it ought to be an extension method. The signature therefore looks like this:




public static Action Fix(this Func<ActionRec, Func<ActionRec>> f)

Now let’s reason about what the Fix method can do. Obviously it has to return an Action, which looks like “return () => { /* TODO */ };”. Question is what the body of the action has to do. Well, it will have to call f at some point, passing in an ActionRec. This returns a function that, when called, will give us another of those ActionRec delegates. As long as a non-null delegate object is returned (null will be used later on as a way to break from the recursion), we can keep calling it in a loop. And that’s where the stack-friendly nature comes from: we realize the recursion using a loop. Here’s how it looks:




public static Action Fix(this Func<ActionRec, Func<ActionRec>> f)
{
    return () =>
    {
        ActionRec a = null;
        for (a = () => a; a != null; a = f(a)())
            ;
    };
}

The last part of the for-statement is the most explanatory one: it calls the user-defined function with the recursive action, which returns an ActionRec. That gets called with the arguments, which for a plain vanilla action are empty, (). To get started, we use the definite assignment “closure over self” trick we saw at the very start of the post (starting with null):



Func<int, int> fac = null;
fac = n => n == 0 ? 1 : n * fac(n - 1);

That’s essentially the fixpoint part of Fix. It will definitely help the reader to trace through the code for the Motivate sample step by step. You’ll see how code in the Fix trampoline will get interleaved with calls to your own delegate:




[Screenshot: debugger showing the trampoline frame and the user delegate on the call stack]


The second frame in the call stack is the trampoline living in the anonymous action inside the Fix method. We’re currently broken in the debugger inside the recursive call to our own delegate. Notice the call stack’s depth stays constant at two frames (ignoring Main), even though we’ve already made a number of recursive calls. Contrast this to the original C#-style Motivate definition, which would already have grown the stack to 10 frames:




[Screenshot: the original recursive Motivate definition, with the call stack already 10 frames deep]


The way to break from a trampoline-based recursion is by returning null from the trampolined function. While that works, we want to add a bit of syntactical surface to it for reasons that will become apparent later (hint: we’ll need a place to stick return values on). So, we define a trivial Break extension method that will return a null-valued ActionRec:




public static ActionRec Break(this ActionRec a) { return null; }

Based on certain conditions we can now decide to break out of the recursion, simply by calling Break on the ActionRec passed in. For example, we could capture a local variable from the outer scope, to act as a counter:




Console.WriteLine("Action of arity 0");
{
int i = 0;
Func<ActionRec, Func<ActionRec>> f = a => () => { Console.WriteLine("Go! " + i++); return i < 10 ? a() : a.Break(); };
f.Fix()();
}
Console.WriteLine();

This will just print 10 Go! messages. Notice I’ve omitted an intermediate variable for the f.Fix() result and call the resulting Action delegate in one go.



 


More recursive Action types



To do something more useful, we want to support higher arities for recursive Action and Func delegates. Let’s start by looking at the Action delegates since we’ve already looked at the simplest case of a recursive Action delegate before. Below is a sample of a recursive Action delegate with one parameter, printing the powers of two with exponents 0 to 9:




Console.WriteLine("Action of arity 1");
{
int i = 0;
Func<ActionRec<int>, Func<int, ActionRec<int>>> f = a => x => { Console.WriteLine("2^" + i++ + " = " + x); return i < 10 ? a(x * 2) : a.Break(); };
f.Fix()(1);
}
Console.WriteLine();

Notice we’re cheating a bit by using a captured outer local variable to restrict the number of recursive calls. It’s left as an exercise to the reader to define another such recursive function where the input parameter represents the “to” argument, i.e. specifying the largest exponent to calculate a power of two for; one possible answer is sketched below.
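One possible answer, sketched here by me (printing in descending order so a single parameter suffices; it relies on the Fix and Break definitions from this post):

// The parameter carries the largest exponent still to be printed.
Func<ActionRec<int>, Func<int, ActionRec<int>>> _powers = powers => e =>
{
    Console.WriteLine("2^" + e + " = " + (1 << e));
    return e > 0 ? powers(e - 1) : powers.Break();
};
_powers.Fix()(9);   // 2^9 down to 2^0, no captured counter needed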



In these samples, the ActionRec<T> delegate represents a recursive action delegate with one generic parameter:




delegate ActionRec<T> ActionRec<T>(T t);

In order to define the recursive action that produces the powers of two, we use a regular function that maps such a recursive action onto a function that can create a new one of those, given an int as the input. Changing the names of the parameters may help to grasp this:




Console.WriteLine("Action of arity 1");
{
int i = 0;
Func<ActionRec<int>, Func<int, ActionRec<int>>> _printPowersOfTwo = printPowersOfTwo => x =>
{
Console.WriteLine("2^" + i++ + " = " + x);
return i < 10 ? printPowersOfTwo(x * 2) : printPowersOfTwo.Break();
};
Action<int> PrintPowersOfTwo = _printPowersOfTwo.Fix();
PrintPowersOfTwo(1);
}
Console.WriteLine();

Now the indented block reads like “void printPowersOfTwo(int x) { … }”. The Fix method’s trampoline is a bit trickier than the one we saw before, as it needs to deal with the one parameter that has to be fed to the called delegate. There’s a bit of voodoo here, since the argument can change on every recursive call. After all, it’s an argument to the delegate: in the sample above, printPowersOfTwo is fed consecutive powers of two. The little hack is shown below:




public static Action<T> Fix<T>(this Func<ActionRec<T>, Func<T, ActionRec<T>>> f)
{
    return t =>
    {
        ActionRec<T> a = null;
        for (a = t_ => { t = t_; return a; }; a != null; a = f(a)(t))
            ;
    };
}

Trace through this for the PrintPowersOfTwo sample, where t starts as value 1. Clearly, a is non-null at that point (due to the assigned lambda expression in the initializer of the for-loop), so we get to call f with that action and argument 1. Now we’re in our code where 1 got assigned to parameter x, causing 2^0 = 1 to be printed to the screen. Ultimately this results in a call to printPowersOfTwo with argument 2. This happens on the action delegate “a” created by the first iteration of the trampoline’s for-loop:




a = t_ => { t = t_; return a; }


So, as a side-effect of calling this delegate, the local variable t got assigned the value 2. And the returned object from this call, “a”, gets assigned in the trampoline’s driver loop to the local variable “a”. In the next iteration, 2 will be fed to the recursive delegate. And so on:




[Screenshot: stepping through the arity-1 trampoline while t gets updated]


Increasing the number of arguments by one more is done in a completely similar way:




Console.WriteLine("Action of arity 2");
{
int i = 0;
Func<ActionRec<int, int>, Func<int, int, ActionRec<int, int>>> f = a => (x, y) => { Console.WriteLine("2^" + x + " = " + y); return ++i < 10 ? a(x + 1, y * 2) : a.Break(); };
f.Fix()(0, 1);
}
Console.WriteLine();


Where the new ActionRec delegate takes two generic parameters:




delegate ActionRec<T1, T2> ActionRec<T1, T2>(T1 t1, T2 t2);

In this sample we use two input parameters, one to represent the exponent and one to accumulate the power of two. The Fix method now has to deal with two input parameters that need to be captured upon recursive calls. This is achieved as follows:




public static Action<T1, T2> Fix<T1, T2>(this Func<ActionRec<T1, T2>, Func<T1, T2, ActionRec<T1, T2>>> f)
{
    return (t1, t2) =>
    {
        ActionRec<T1, T2> a = null;
        for (a = (t1_, t2_) => { t1 = t1_; t2 = t2_; return a; }; a != null; a = f(a)(t1, t2))
            ;
    };
}

What we haven’t repeated for every sample is the definition of the Break methods, which return null to signal a break from the recursion. Here they are for completeness:




public static ActionRec<T> Break<T>(this ActionRec<T> a) { return null; }
public static ActionRec<T1, T2> Break<T1, T2>(this ActionRec<T1, T2> a) { return null; }

Below is an insight-providing screenshot illustrating the way recursion happens:




[Screenshot: call stack during a bounce of the arity-2 trampoline]


In Main, we called the fixed delegate with arguments 0 and 1. This caused us to enter the outermost lambda expression in Fix with t1 and t2 respectively set to 0 and 1. This is the second frame on the call stack (read from the bottom). The for-loop has proceeded to the first call of its update expression, resulting in a call to f with argument a and a subsequent invocation on the result with arguments 0 and 1. As a result, our lambda expression, lexically nested in the Main method, got called as observed by the third frame on the call stack, with x and y respectively set to 0 and 1. Here the recursive call happens by invoking the a delegate with arguments 1 (x + 1) and 2 (y * 2). Finally, this put us back in the trampoline where those two values will be captured in t1 and t2, and that’s where the debugger is currently sitting.



Moving on from here, we’ll back out of the trampoline and return the result of the apparent recursive call on “a” from lambda “f” in Main. This by itself puts us back in the driver for-loop, where “a” will be tested for null (which it isn’t yet) and the whole cycle starts again. This illustrates the key essence of the trampoline: instead of having the user directly causing a recursive call, callbacks to the trampoline code cause it to capture enough state information to make the call later on. This effectively flattens recursive calls into the for-loop. What we lost is the ability to do work after the recursive call returns (something we could work around by getting into the land of continuations).



 


Recursive Func types



The essential tricks to deal with input parameters have been explored above. However, Func delegate types have one more property we haven’t investigated just yet: the ability to return a value. We’ve seen the Break method before, but for Action delegates it doesn’t do anything but return null. In the case of recursive Func types, we’ll have to do something more in order to return an object to the caller.



Let’s get started by defining the FuncRec delegate types. Again, those are mirrored after the regular Func delegates, but we have to sacrifice the return type position for a FuncRec:




delegate FuncRec<R> FuncRec<R>();
delegate FuncRec<T, R> FuncRec<T, R>(T t);
delegate FuncRec<T1, T2, R> FuncRec<T1, T2, R>(T1 t1, T2 t2);

Returning from a recursive FuncRec delegate will be done through the Break methods that now will take an argument for the return value:




public static FuncRec<R> Break<R>(this FuncRec<R> a, R res) { _brr[a] = res; return a; }
public static FuncRec<T, R> Break<T, R>(this FuncRec<T, R> a, R res) { _brr[a] = res; return a; }
public static FuncRec<T1, T2, R> Break<T1, T2, R>(this FuncRec<T1, T2, R> a, R res) { _brr[a] = res; return a; }

What’s happening inside those Break methods will be discussed further on. For now, it suffices to see the signatures, taking in an R parameter to hold the return value of the recursive call. Also notice how those methods return “a” instead of null.



Before we dig any deeper in the implementation, let’s see a couple of recursive functions in action:




Console.WriteLine("Function of arity 0");
{
int i = 0;
Func<FuncRec<int>, Func<FuncRec<int>>> f = a => () => { Console.WriteLine("Fun! " + i++); return i < 10 ? a() : a.Break(i); };
Console.WriteLine("Result: " + f.Fix()());
}
Console.WriteLine();

Console.WriteLine("Function of arity 1");
{
int i = 0;
Func<FuncRec<int, int>, Func<int, FuncRec<int, int>>> f = a => x => { Console.WriteLine("2^" + i++ + " = " + x); return i < 10 ? a(x * 2) : a.Break(i); };
Console.WriteLine("Result: " + f.Fix()(1));
}
Console.WriteLine();

Console.WriteLine("Function of arity 2");
{
int i = 0;
Func<FuncRec<int, int, int>, Func<int, int, FuncRec<int, int, int>>> f = a => (x, y) => { Console.WriteLine("2^" + x + " = " + y); return ++i < 10 ? a(x + 1, y * 2) : a.Break(i); };
Console.WriteLine("Result: " + f.Fix()(0, 1));
}
Console.WriteLine();

We bound the recursion again by means of a captured outer local variable, though this is not a requirement; it simply keeps the samples from running away. Concerning the input parameters, things look identical to the ActionRec samples. What’s different are the Break calls and the output types specified in the FuncRec type parameters. We’ve simply used the bounding variable “i” as the return value for illustration purposes. Later, when we see factorial again, the return value will be more interesting.



How does Fix work this time? Let’s show one sample for the function with one argument:




public static Func<T, R> Fix<T, R>(this Func<FuncRec<T, R>, Func<T, FuncRec<T, R>>> f)
{
    return t =>
    {
        object res_;
        FuncRec<T, R> a = null;
        for (a = t_ => { t = t_; return a; }; !_brr.TryGetValue(a, out res_); a = f(a)(t))
            ;
        var res = (R)res_;
        _brr.Remove(a);
        return res;
    };
}

I’m using an ugly trick here to store the return value. Have a look at the Break methods that do stick the specified result in a dictionary, which is typed as follows:




// Would really like to store result on a property on the delegate,
// but can't derive from Delegate manually in C#... This is "brr".
private static Dictionary<Delegate, object> _brr = new Dictionary<Delegate, object>();

Break adds the return value to this dictionary, while the trampoline’s driver loop repeatedly checks for such a value. If one is found, a Break call has been made and the loop terminates, stopping the recursion and sending the answer to the caller. Alternative, potentially cleaner, tricks can be thought of, but I haven’t spent much more time thinking about this.
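One modest cleanup, offered as a sketch of mine rather than part of the original code: the shared static dictionary isn't thread-safe when multiple fixed functions run concurrently, so it could at least be made per-thread, keeping the _brr name so the Fix methods need no changes:

// Hypothetical tweak: one result store per thread instead of a global one.
[ThreadStatic]
private static Dictionary<Delegate, object> _brrStore;

private static Dictionary<Delegate, object> _brr
{
    get { return _brrStore ?? (_brrStore = new Dictionary<Delegate, object>()); }
}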



All in all, the core Fix is pretty much the same as for the action-based delegates, apart from the TryGetValue call in the condition, and some dictionary-related cleanup code. Below is our destination factorial sample:




Console.WriteLine("Factorial");
{
Func<FuncRec<int, int, int>, Func<int, int, FuncRec<int, int, int>>> fac_ = f => (x, a) => x <= 1 ? f.Break(a) : f(x - 1, a * x);
Func<int, int> fac = (int n) => fac_.Fix()(n, 1);
Enumerable.Range(1, 10).Select(n => new { n, fac = fac(n) }).Do(Console.WriteLine).Run();
}
Console.WriteLine();

The type of the intermediate function definition is quite impressive due to the fixpoint structure, but the essence of the function is quite easy to grasp:




f => (x, a) => x <= 1 ? f.Break(a) : f(x - 1, a * x)


Given a function (that will represent the fixed factorial definition, i.e. itself) and two arguments, one to count down and one to represent the accumulated product, we simply continue multiplying till we hit the base case, where we return (using Break) the accumulated value. The next line creates a simple wrapper function to hide away the accumulator base value of 1:




Func<int, int> fac = (int n) => fac_.Fix()(n, 1);


And now we have a simple factorial function we can call in the regular manner we’re used to, using delegate invocation syntax. To illustrate it for multiple values, I'm using a simple LINQ statement, projecting each value from 1 to 10 onto an anonymous object with both that number and the corresponding factorial value. The Do and Run methods will be introduced in the Reactive Framework as new extensions to IEnumerable:




public static IEnumerable<T> Do<T>(this IEnumerable<T> src, Action<T> a)
{
    foreach (var item in src)
    {
        a(item);
        yield return item;
    }
}

public static void Run<T>(this IEnumerable<T> src)
{
    foreach (var _ in src)
        ;
}


To prove the stack utilization remains constant, we can extend the sample using the handy System.Diagnostics.StackTrace class and the .NET 4.0 Tuple class. In the non-trampolined version, we’d see the stack grow on every call, reaching its maximum depth at the point we return from the base case. So, watching the stack depth at the point of the base case’s return call (using Break) will be a good metric of success:




Console.WriteLine("Factorial + stack analysis");
{
Func<FuncRec<int, int, Tuple<int, int>>, Func<int, int, FuncRec<int, int, Tuple<int, int>>>> fac_ =
f => (x, a) => x <= 1 ? f.Break(new Tuple<int,int>(a, new StackTrace().FrameCount)) : f(x - 1, a * x);
Func<int, Tuple<int, int>> fac = (int n) => fac_.Fix()(n, 1);
(from n in Enumerable.Range(1, 10)
let f = fac(n)
select new { n, fac = f.Item1, stack = f.Item2 }).Do(Console.WriteLine).Run();
}
Console.WriteLine();

The result is shown below:




[Screenshot: factorial results showing a constant stack frame count]


This looks good, doesn’t it? If you get tired of the long generic Func types, simply call the Fix method directly, passing in the types of the arguments and return value:




var fac_ = Ext.Fix<int, int, Tuple<int, int>>(f => (x, a) =>
    x <= 1 ? f.Break(new Tuple<int, int>(a, new StackTrace().FrameCount)) : f(x - 1, a * x)
);


Beautiful! It almost reads like a regular C# method declaration (with plenty of imagination, which the author happens to possess).



 


Putting the pieces together



Since readers often want to try out the thing as a whole, here’s the implementation of my latest Esoteric namespace:




// Trampoline for tail recursive Action and Func delegate creation and invocation in constant stack space
// bartde - 10/29/2009

using System;
using System.Collections.Generic;

namespace Esoteric
{
    delegate ActionRec ActionRec();
    delegate ActionRec<T> ActionRec<T>(T t);
    delegate ActionRec<T1, T2> ActionRec<T1, T2>(T1 t1, T2 t2);

    delegate FuncRec<R> FuncRec<R>();
    delegate FuncRec<T, R> FuncRec<T, R>(T t);
    delegate FuncRec<T1, T2, R> FuncRec<T1, T2, R>(T1 t1, T2 t2);

    static class Ext
    {
        public static ActionRec Break(this ActionRec a) { return null; }
        public static ActionRec<T> Break<T>(this ActionRec<T> a) { return null; }
        public static ActionRec<T1, T2> Break<T1, T2>(this ActionRec<T1, T2> a) { return null; }

        public static Action Fix(this Func<ActionRec, Func<ActionRec>> f)
        {
            return () =>
            {
                ActionRec a = null;
                for (a = () => a; a != null; a = f(a)())
                    ;
            };
        }

        public static Action<T> Fix<T>(this Func<ActionRec<T>, Func<T, ActionRec<T>>> f)
        {
            return t =>
            {
                ActionRec<T> a = null;
                for (a = t_ => { t = t_; return a; }; a != null; a = f(a)(t))
                    ;
            };
        }

        public static Action<T1, T2> Fix<T1, T2>(this Func<ActionRec<T1, T2>, Func<T1, T2, ActionRec<T1, T2>>> f)
        {
            return (t1, t2) =>
            {
                ActionRec<T1, T2> a = null;
                for (a = (t1_, t2_) => { t1 = t1_; t2 = t2_; return a; }; a != null; a = f(a)(t1, t2))
                    ;
            };
        }

        // Would really like to store result on a property on the delegate,
        // but can't derive from Delegate manually in C#... This is "brr".
        private static Dictionary<Delegate, object> _brr = new Dictionary<Delegate, object>();

        public static FuncRec<R> Break<R>(this FuncRec<R> a, R res) { _brr[a] = res; return a; }
        public static FuncRec<T, R> Break<T, R>(this FuncRec<T, R> a, R res) { _brr[a] = res; return a; }
        public static FuncRec<T1, T2, R> Break<T1, T2, R>(this FuncRec<T1, T2, R> a, R res) { _brr[a] = res; return a; }

        public static Func<R> Fix<R>(this Func<FuncRec<R>, Func<FuncRec<R>>> f)
        {
            return () =>
            {
                object res_;
                FuncRec<R> a = null;
                for (a = () => a; !_brr.TryGetValue(a, out res_); a = f(a)())
                    ;
                var res = (R)res_;
                _brr.Remove(a);
                return res;
            };
        }

        public static Func<T, R> Fix<T, R>(this Func<FuncRec<T, R>, Func<T, FuncRec<T, R>>> f)
        {
            return t =>
            {
                object res_;
                FuncRec<T, R> a = null;
                for (a = t_ => { t = t_; return a; }; !_brr.TryGetValue(a, out res_); a = f(a)(t))
                    ;
                var res = (R)res_;
                _brr.Remove(a);
                return res;
            };
        }

        public static Func<T1, T2, R> Fix<T1, T2, R>(this Func<FuncRec<T1, T2, R>, Func<T1, T2, FuncRec<T1, T2, R>>> f)
        {
            return (t1, t2) =>
            {
                object res_;
                FuncRec<T1, T2, R> a = null;
                for (a = (t1_, t2_) => { t1 = t1_; t2 = t2_; return a; }; !_brr.TryGetValue(a, out res_); a = f(a)(t1, t2))
                    ;
                var res = (R)res_;
                _brr.Remove(a);
                return res;
            };
        }
    }
}

Another sample illustrating the stack-friendly nature of the trampoline is shown below:




Console.WriteLine("Forever! (CTRL-C to terminate)");
{
bool boom = false;
Console.CancelKeyPress += (s, e) =>
{
boom = true;
e.Cancel = true;
};


    Func<ActionRec, Func<ActionRec>> f = a => () =>
{
if (boom)
throw new Exception("Stack use is constant!");
return a();
};


    try
{
f.Fix()();
}
catch (Exception ex)
{
// Inspect stack trace here
Console.WriteLine(ex);
}
}

This function never returns unless you force it by pressing CTRL-C. At that point, you’ll see the exception’s stack trace being printed, revealing the constant stack space:




[Screenshot: the exception’s stack trace, revealing constant stack usage]


It also illustrates how the trampoline is sandwiched between our call to the recursive function (f.Fix()()) and the callback to the code we wrote (f).



 


Homework



The reader is invited to think about realizing mutual recursion in a stack-friendly way. For example, the sample below is an F# mutual recursive set of two functions used to determine whether a number is odd or even:




let rec isEven n =
  if n = 0 then
    true
  else
    isOdd (n - 1)
and isOdd n =
  if n = 0 then
    false
  else
    isEven (n - 1)


Its use is shown below:




[Screenshot: isEven and isOdd in use in F# interactive]


In fact, the F# implementation generates mutually recursive calls here, but in a stack-friendly way by using tail calls (only shown for the isEven function below, but similar for isOdd):




[Screenshot: disassembly of isEven, showing the tail call]


Tail calls reuse the current stack frame, therefore not exhausting the stack upon recursion. The same can be achieved by means of a trampoline if you’re brave enough to give it a try. Hint: notice how mutually recursive functions in F# are subtly bundled by means of an “and” keyword.
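To set the scene in C#, here's the naive translation; a sketch of mine that works but exhausts the stack for large n, which is exactly what your trampoline should avoid:

static bool IsEven(int n)
{
    return n == 0 ? true : IsOdd(n - 1);   // a mutual tail call, but a real call in C#
}

static bool IsOdd(int n)
{
    return n == 0 ? false : IsEven(n - 1);
}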



As an additional piece of homework, think about ways we could use a trampoline to call functions that still return a useful value, to be used in code after the call returns. As an example, consider the classic definition of factorial:



int fac(int n)
{
    return n == 0 ? 1 : n * fac(n - 1);
}

How would you realize exactly the code above, using trampolines and whatnot, without exhausting call stack space? Recall the problem with the above is the fact we need to do a multiplication after the recursive fac call returns. Hint: think of continuations and maybe even the typical “von Neumann machine trade-off” between code (CPU) and data (memory).



Happy jumping!
