Saturday, January 9, 2010

ASP.NET 4 SEO Improvements (VS 2010 and .NET 4.0 Series)

[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

This is the thirteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s post covers some of the improvements being made around Search Engine Optimization (SEO) with ASP.NET 4.

Why SEO?

Search engine optimization (SEO) is important for any publicly facing web-site.  A large percentage of site traffic now comes from search engines, and improving your site’s search relevancy will lead to more user traffic from search engine queries (which can directly or indirectly increase the revenue you make through your site).

Measuring the SEO of your website with the SEO Toolkit

Last month I blogged about the free SEO Toolkit we’ve shipped that you can use to analyze your site for SEO correctness, and which provides detailed suggestions on any SEO issues it finds. 

I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in the site, and pinpoint ways to optimize it further.

ASP.NET 4 SEO Improvements

ASP.NET 4 includes a bunch of new runtime features that can help you to further optimize your site for SEO.  Some of these new features include:

  • New Page.MetaKeywords and Page.MetaDescription properties
  • New URL Routing support for ASP.NET Web Forms
  • New Response.RedirectPermanent() method

Below are details about how you can take advantage of them to further improve your search engine relevancy.

Page.MetaKeywords and Page.MetaDescription properties

One simple recommendation to improve the search relevancy of pages is to make sure you always output relevant “keywords” and “description” <meta> tags within the <head> section of your HTML.  For example:

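Something along these lines, where the keyword and description values are of course placeholders for your own content:

<head>
    <title>Products: Software</title>
    <meta name="keywords" content="software, asp.net, web development" />
    <meta name="description" content="Browse our catalog of software products." />
</head>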

One of the nice improvements with ASP.NET 4 Web Forms is the addition of two new properties to the Page class, MetaKeywords and MetaDescription, which make programmatically setting these values within your code-behind classes much easier and cleaner.

ASP.NET 4’s <head> server control now looks at these values and will use them when outputting the <head> section of pages.  This behavior is particularly useful for scenarios where you are using master-pages within your site – and the <head> section ends up being in a .master file that is separate from the .aspx file that contains the page specific content.  You can now set the new MetaKeywords and MetaDescription properties in the .aspx page and have their values automatically rendered by the <head> control within the master page.

Below is a simple code snippet that demonstrates setting these properties programmatically within a Page_Load() event handler:

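A minimal sketch (the keyword and description strings are illustrative):

protected void Page_Load(object sender, EventArgs e)
{
    // Both properties are new on System.Web.UI.Page in ASP.NET 4
    Page.MetaKeywords = "software, asp.net, web development";
    Page.MetaDescription = "Browse our catalog of software products.";
}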

In addition to setting the Keywords and Description properties programmatically in your code-behind, you can also now declaratively set them within the @Page directive at the top of .aspx pages.  The below snippet demonstrates how to do this:

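Roughly like so (the other directive attributes are just the usual boilerplate for an illustrative page):

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Products.aspx.cs" Inherits="MySite.Products"
    MetaKeywords="software, asp.net, web development"
    MetaDescription="Browse our catalog of software products." %>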

As you’d probably expect, if you set the values programmatically they will override any values declaratively set in either the <head> section or via the @Page directive.

URL Routing with ASP.NET Web Forms

URL routing was a capability we first introduced with ASP.NET 3.5 SP1, and which is already used within ASP.NET MVC applications to expose clean, SEO-friendly “web 2.0” URLs.  URL routing lets you configure an application to accept request URLs that do not map to physical files. Instead, you can use routing to define URLs that are semantically meaningful to users and that can help with search-engine optimization (SEO).

For example, the URL for a traditional page that displays product categories might look like below:

http://www.mysite.com/products.aspx?category=software

Using the URL routing engine in ASP.NET 4 you can now configure the application to accept the following URL instead to render the same information:

http://www.mysite.com/products/software

With ASP.NET 4.0, URLs like above can now be mapped to both ASP.NET MVC Controller classes, as well as ASP.NET Web Forms based pages.  You can even have a single application that contains both Web Forms and MVC Controllers, and use a single set of routing rules to map URLs between them.
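With Web Forms, a route like this would typically be registered at application startup using the new MapPageRoute() helper on the route table; below is a minimal sketch in Global.asax (the route name, URL pattern, and page path are illustrative):

void Application_Start(object sender, EventArgs e)
{
    // Requires System.Web.Routing.
    // Map URLs of the form /products/{category} to the physical Products.aspx page.
    RouteTable.Routes.MapPageRoute(
        "products-route",       // route name
        "products/{category}",  // URL pattern
        "~/Products.aspx");     // physical file that handles the request
}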

Please read my previous URL Routing with ASP.NET 4 Web Forms blog post to learn more about how the new URL Routing features in ASP.NET 4 support Web Forms based pages.

Response.RedirectPermanent() Method

It is pretty common within web applications to move pages and other content around over time, which can lead to an accumulation of stale links in search engines.

In ASP.NET, developers have often handled requests to old URLs by using the Response.Redirect() method to programmatically forward a request to the new URL.  However, what many developers don’t realize is that the Response.Redirect() method issues an HTTP 302 Found (temporary redirect) response, which results in an extra HTTP round trip when users attempt to access the old URLs.  Search engines typically will not follow across multiple redirection hops – which means using a temporary redirect can negatively impact your page ranking.  You can use the SEO Toolkit to identify places within a site where you might have this issue.

ASP.NET 4 introduces a new Response.RedirectPermanent(string url) helper method that can be used to perform a redirect using an HTTP 301 (moved permanently) response.  This will cause search engines and other user agents that recognize permanent redirects to store and use the new URL that is associated with the content.  This will enable your content to be indexed and your search engine page ranking to improve.

Below is an example of using the new Response.RedirectPermanent() method to redirect to a specific URL:

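A minimal sketch (the target URL is illustrative):

// Issues an HTTP 301 (moved permanently) response instead of a 302
Response.RedirectPermanent("/products/software");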

ASP.NET 4 also introduces new Response.RedirectToRoute(string routeName) and Response.RedirectToRoutePermanent(string routeName) helper methods that can be used to redirect users using either a temporary or permanent redirect using the URL routing engine.  The code snippets below demonstrate how to issue temporary and permanent redirects to named routes (that take a category parameter) registered with the URL routing system.

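Assuming a route named "products-route" that takes a category parameter (as registered in the routing sketch above), the two calls might look like this:

// Temporary redirect (HTTP 302) to the named route
Response.RedirectToRoute("products-route", new { category = "software" });

// Permanent redirect (HTTP 301) to the named route
Response.RedirectToRoutePermanent("products-route", new { category = "software" });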

You can use the above routes and methods for both ASP.NET Web Forms and ASP.NET MVC based URLs.

Summary

ASP.NET 4 includes a bunch of feature improvements that make it easier to build public facing sites that have great SEO.  When combined with the SEO Toolkit, you should be able to use these features to increase user traffic to your site – and hopefully increase the direct or indirect revenue you make from them.

Hope this helps,

Scott

"

New features of C# 4.0

New features of C# 4.0: "This article covers New features of C# 4.0. Article has been divided into below sections.

Introduction.
Dynamic Lookup.
Named and Optional Arguments.
Features for COM interop.
Variance.
Relationship with Visual Basic.
Resources.

Introduction

It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them.

Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out.

Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release.

The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

C# 4.0

The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include

a. objects from dynamic programming languages, such as Python or Ruby

b. COM objects accessed through IDispatch

c. ordinary .NET types accessed through reflection

d. objects with changing structure, such as HTML DOM objects

While C# remains a statically typed language, we aim to vastly improve the interaction with such objects.

A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set.

The new features in C# 4.0 fall into four groups:

Dynamic lookup

Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass C# static type checking and instead get resolved at runtime.

Named and optional parameters

Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.

COM specific interop features

Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than it is today. On top of that, however, we are adding a number of other small features that further improve the interop experience.

Variance

It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type-safe “co- and contravariance”, and common BCL types are updated to take advantage of that.

Dynamic Lookup

Dynamic lookup gives you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object.

This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: Static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

The dynamic type

C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime:

dynamic d = GetDynamicObject(…);
d.M(7);

The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it.

The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

dynamic d = 7; // implicit conversion
int i = d; // assignment conversion

Dynamic operations

Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

dynamic d = GetDynamicObject(…);
d.M(7); // calling methods
d.f = d.P; // getting and settings fields and properties
d["one"] = d["two"]; // getting and setting through indexers
int i = d + 3; // calling operators
string s = d(5,7); // invoking as a delegate

The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime.

The result of any dynamic operation is itself of type dynamic.
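Because the result is itself dynamic, dynamic operations compose naturally; a small sketch, reusing the hypothetical object from the examples above:

dynamic d = GetDynamicObject(…);
var result = d.M(7); // result is implicitly typed as dynamic
result.N();          // so operations on it are dispatched dynamically too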

Runtime lookup

At runtime a dynamic operation is dispatched according to the nature of its target object d:

COM objects

If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties.

Dynamic objects

If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object’s properties using property syntax.

Plain objects

Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler.

Example

Assume the following code:

dynamic d1 = new Foo();
dynamic d2 = new Bar();
string s;

d1.M(s, d2, 3, null);

Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to:

“Perform an instance method call of M with the following arguments:

1. a string

2. a dynamic

3. a literal int 3

4. a literal object null”

At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up the payload and finishes the overload resolution job based on runtime type information, proceeding as follows:

1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.

2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.

3. If the method is found it is invoked; otherwise a runtime exception is thrown.

Overload resolution with dynamic arguments

Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

Foo foo = new Foo();
dynamic d = new Bar();

var result = foo.M(d);

The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

The Dynamic Language Runtime

An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0.

The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch.

To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR. For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain.

Open issues

There are a few limitations and things that might work differently than you would expect.

· The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this.

· Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.

· Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to.

One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

dynamic collection = …;

var result = collection.Select(e => e + 5);

If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation.

There are no plans to address these limitations in C# 4.0.

Named and Optional Arguments

Named and optional parameters are really two distinct features, but they are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list.

Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted.

Even in APIs for .NET however you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

Optional parameters

A parameter is declared optional simply by providing a default value for it:

public void M(int x, int y = 5, int z = 7);

Here y and z are optional parameters and can be omitted in calls:

M(1, 2, 3); // ordinary call of M
M(1, 2); // omitting z – equivalent to M(1, 2, 7)
M(1); // omitting both y and z – equivalent to M(1, 5, 7)

Named and optional arguments

C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

M(1, z: 3); // passing z by name

or

M(x: 1, z: 3); // passing both x and z by name

or even

M(z: 3, x: 1); // reversing the order of arguments

All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1.

Optional and named arguments can be used not only with methods but also with indexers and constructors.
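For example, a constructor with optional parameters (the Point type here is a hypothetical illustration) lets callers specify only the arguments they care about:

public class Point
{
    public int X { get; private set; }
    public int Y { get; private set; }

    public Point(int x = 0, int y = 0)
    {
        X = x;
        Y = y;
    }
}

var origin = new Point();  // equivalent to new Point(0, 0)
var p = new Point(y: 5);   // equivalent to new Point(0, 5)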

Overload resolution

Named and optional arguments affect overload resolution, but the changes are relatively simple:

A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type.

Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes.

If two signatures are equally good, one that does not omit optional parameters is preferred.

M(string s, int i = 1);
M(object o);
M(int i, string s = "Hello");
M(int i);

M(5);

Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn’t convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int).

M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object.

Finally M(int) is better than M(int,string) because no optional arguments are omitted.

Thus the method that gets called is M(int).

Features for COM interop

Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

Dynamic import

Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance.

In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them.

This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

excel.Cells[1, 1].Value = "Hello";

instead of

((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

and

Excel.Range range = excel.Cells[1, 1];

instead of

Excel.Range range = (Excel.Range)excel.Cells[1, 1];

Compiling without PIAs

Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application.

The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.
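In Visual Studio 2010 this surfaces as the “Embed Interop Types” property on an assembly reference; at the command line it corresponds to the compiler’s /link option. A sketch (the PIA path is illustrative):

csc /link:"C:\PIAs\Microsoft.Office.Interop.Excel.dll" Program.cs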

Omitting ref

Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.

It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.
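As a sketch, consider Word’s Document.SaveAs method, whose many parameters are all declared as ref object in the PIA (doc here is assumed to be a Word.Document). Combined with optional parameters, a call that used to look roughly like

object fileName = "Report.docx";
object missing = Type.Missing;
doc.SaveAs(ref fileName, ref missing, ref missing, …); // and so on for each parameter

can now be written as

doc.SaveAs("Report.docx");

with the compiler generating the temporaries behind the scenes.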

Open issues

A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them.

There are currently no plans to address these remaining speed bumps in C# 4.0.

Variance

An aspect of generics that often comes across as surprising is that the following is illegal:

IList<string> strings = new List<string>();
IList<object> objects = strings;

The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

objects[0] = 5;
string s = strings[0];

This would allow an int to be inserted into a list of strings and subsequently extracted as a string, which would be a breach of type safety.

However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

IEnumerable<object> objects = strings;

There is no way we can put the wrong kind of thing into strings through objects, because IEnumerable<object> has no method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

Covariance

In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

public interface IEnumerable<out T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<out T> : IEnumerator
{
    bool MoveNext();
    T Current { get; }
}

The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B.

As a result, any sequence of strings is also e.g. a sequence of objects.

This is useful e.g. in many LINQ methods. Using the declarations above:

var result = strings.Union(objects); // succeeds with an IEnumerable<object>

This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

Contravariance

Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>:

public interface IComparer<in T>
{
    int Compare(T left, T right);
}

The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: If a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.

A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

public delegate TResult Func<in TArg, out TResult>(TArg arg);

Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.
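A small illustration of what that permits (the lambda is illustrative):

Func<object, string> f = o => o.ToString();

// Contravariant in the argument and covariant in the result:
// a function from object to string is also a function from string to object.
Func<string, object> g = f;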

Limitations

Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion.
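To illustrate the reference-conversion restriction:

IEnumerable<int> ints = new List<int>();
IEnumerable<object> objects = ints; // error: int to object is a boxing conversion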

Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

COM Example

Here is a larger Office automation example that shows many of the new C# features in action.

using System;
using System.Diagnostics;
using System.Linq;
using Excel = Microsoft.Office.Interop.Excel;
using Word = Microsoft.Office.Interop.Word;

class Program
{
    static void Main(string[] args)
    {
        var excel = new Excel.Application();
        excel.Visible = true;

        excel.Workbooks.Add();                       // optional arguments omitted

        excel.Cells[1, 1].Value = "Process Name";    // no casts; Value dynamically
        excel.Cells[1, 2].Value = "Memory Usage";    // accessed

        var processes = Process.GetProcesses()
            .OrderByDescending(p => p.WorkingSet)
            .Take(10);

        int i = 2;
        foreach (var p in processes)
        {
            excel.Cells[i, 1].Value = p.ProcessName; // no casts
            excel.Cells[i, 2].Value = p.WorkingSet;  // no casts
            i++;
        }

        Excel.Range range = excel.Cells[1, 1];       // no casts

        Excel.Chart chart = excel.ActiveWorkbook.Charts.
            Add(After: excel.ActiveSheet);           // named and optional arguments

        chart.ChartWizard(
            Source: range.CurrentRegion,
            Title: "Memory Usage in " + Environment.MachineName); // named + optional

        chart.ChartStyle = 45;

        chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
            Excel.XlCopyPictureFormat.xlBitmap,
            Excel.XlPictureAppearance.xlScreen);

        var word = new Word.Application();
        word.Visible = true;

        word.Documents.Add();                        // optional arguments

        word.Selection.Paste();
    }
}

The code is much more terse and readable than the C# 3.0 counterpart.

Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

Relationship with Visual Basic

A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:

· Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.

· Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.

· NoPIA and variance are both being introduced to VB and C# at the same time.

VB in turn is adding a number of features that have hitherto been a mainstay of C#. As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone.

Resources

All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!


"

Friday, January 8, 2010

Doing It Wrong

Enterprise Systems, I mean. And not just a little bit, either. Orders of magnitude wrong. Billions and billions of dollars worth of wrong. Hang-our-heads-in-shame wrong. It’s time to stop the madness.


These last five years at Sun, I’ve been lucky: I live in the Open-Source and “Web 2.0” communities, and at the same time I’ve been given significant quality time with senior IT people among our Enterprise customers.


What I’m writing here is the single most important take-away from my Sun years, and it fits in a sentence: The community of developers whose work you see on the Web, who probably don’t know what ADO or UML or JPA even stand for, deploy better systems at less cost in less time at lower risk than we see in the Enterprise. This is true even when you factor in the greater flexibility and velocity of startups.


This is unacceptable. The Fortune 1,000 are bleeding money and missing huge opportunities to excel and compete. I’m not going to say that these are low-hanging fruit, because if it were easy to bridge this gap, it’d have been bridged. But the gap is so big, the rewards are so huge, that it’s time for some serious bridge-building investment. I don’t know what my future is right now, but this seems by far the most important thing for my profession to be working on.


The Web These Days


It’s like this: The time between having an idea and its public launch is measured in days not months, weeks not years. Same for each subsequent release cycle. Teams are small. Progress is iterative. No oceans are boiled, no monster requirements documents written.

And what do you get? Facebook. Google. Twitter. Ravelry. Basecamp. TripIt. GitHub. And on and on and on.


Obviously, the technology matters. This isn’t the place for details, but apparently the winning mix includes dynamic languages and Web frameworks and TDD and REST and Open Source and NoSQL at varying levels of relative importance.


More important is the culture: iterative development, continuous refactoring, ubiquitous unit testing, starting small, gathering user experience before it seems reasonable. All of which, to be fair, I suppose had its roots in last decade’s Extreme and Agile movements. I don’t hear a lot of talk these days from anyone claiming to “do Extreme” or “be Agile”. But then, in Web-land for damn sure I never hear any talk about large fixed-in-advance specifications, or doing the UML first, or development cycles longer than a single-digit number of weeks.


In The Enterprise


I’m not going to recite the statistics about the proportions of big projects that fail to work out, or flog moribund horses like the failed FBI system or Britain’s monumentally-troubled (to the tune of billions) NHS National Programme for IT. For one thing, the litany of disasters in the private sector is just as big in the aggregate and the batting average isn’t much better; it’s just that businesses can sweep the ashes under the carpet.


If you enjoy this sort of stuff, I recommend Michael Krigsman’s IT Project Failures column over at ZDNet. Also, Bruce Webster is very good. And for some more gloomy numbers, check out The CHAOS Report 2009 on IT Project Failure.


Amusingly, all the IT types who write about this agree that the problem is “excessive complexity”, whatever that means. Predictably, many of them follow the trail-of-tears story with a pitch for their own methodology, which they say will fix the problem. And even if we therefore suspect them of cranking up the gloom-&-doom knob a bit, the figures remain distressing.


So, what is to be done?


Plan A: Don’t Build Systems


The best thing, of course, is to simply not build your own systems. As many in our industry have pointed out, perhaps most eloquently Nicholas Carr, everything would be better if we could do IT the way we do electricity; hook up to the grid, let the IT utility run it all, and get billed per unit of usage.


This is where all the people now running around shouting “Cloud! Cloud! Cloud!” are trying to go. And it’s where Salesforce.com, for example, already is.


If you must run systems in-house, don’t engineer them, get ’em pre-cooked from Oracle or SAP or whoever. I can’t imagine any nonspecialist organization for whom it would make sense to build an HR or accounting application from scratch at this point in history.


Of course, we’re not in the Promised Land yet. I’m actually surprised that Salesforce isn’t a lot bigger than it is; a variety of things are holding back the migration to the utility model. Also, you hear tales of failed implementations at the SAP and Oracle app-levels too, especially CRM. And Oracle is well-known to be ferociously hard at work on a wholesale revision of the app stack with the Fusion Applications. But still, even if things aren’t perfect, nobody is predicting a return to hand-crafted Purchasing or Travel-Expense systems. Thank goodness.


But Sometimes You Have To


I don’t believe we’ll ever go to a pure-utility model for IT. Every world-class business has some sort of core competence, and there are good arguments that sometimes, you should implement your own systems around yours. My favorite example, of the ones I’ve seen over the past few years, is the NASDAQ trading system, which handles a ridiculous number of transactions in 6½ hours every trading day and pushes certain well-known technologies to places that I’d have flatly sworn were impossible if I hadn’t seen it.


Here’s a negative example: One of the world’s most ferocious competitive landscapes is telecoms, which these days means mobile telecoms. One way a telecom might like to compete would be to provide a better customer experience: billing, support, and so on. But to some degree they can’t, because many of them have outsourced much of that stuff to Amdocs.


Given all the colossal high-visibility failures like the ones I mentioned earlier, what responsible telecom executive would authorize going ahead with building an in-house alternative? But at some level that’s insane; if your business is customer service, how can you pass up an opportunity to compete by offering better customer service? The telecom networks around where I live seem to put most of their strategic investments into marketing, which is a bit sad.


Plan B: Do It Better


Here’s a thought experiment: Suppose you asked one of the blue-suit solution providers to quote you on building Ravelry or Twitter or Basecamp. What would the costs be like? And how much confidence would you have in a good result? Consider the same questions for a new mobile-network billing system.


The point is that that kind of thing simply cannot be built if you start with large formal specifications and fixed-price contracts and change-control procedures and so on. So if your enterprise wants the sort of outcomes we’re seeing on the Web (and a lot more should), you’re going to have to adopt some of the cultures and technologies that got them built.


It’s not going to be easy; Enterprise IT has spent decades growing a defensive culture based on the premise that you only get noticed when you screw up, so that must be avoided at all costs.


I’m not the only one thinking about how we can get Enterprise Systems unjammed and make them once again part of the solution, not part of the problem. It’s a good thing to be thinking about.


"