Tuesday, August 17, 2010

Visual Studio 2010 COM-Interface Bugfest

"EnvDTE is an assembly-wrapped COM library containing the objects and members for Visual Studio core automation."

But unfortunately the implementation of the library in the VS 2010 RTM version is pretty much unusable. Some methods like EnvDTE.Project.Delete() are not even implemented.


Others like EnvDTE._Solution.Remove(Project project) have buggy implementations. Removing a project with _Solution.Remove(Project project) will work, but when you try to add the same project from a different location using the _Solution.AddFromFile method, you will end up with the error message "A project with that name is already opened in the solution". Once you have removed it, you have to restart Visual Studio if you ever want to be able to add the project to the solution again.
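To illustrate, a minimal sketch of the failing sequence (the project index and the path are placeholders; dte is assumed to be an EnvDTE80.DTE2 instance for the running Visual Studio):

EnvDTE.Project project = dte.Solution.Projects.Item(1);

// Removing works; the project disappears from the solution.
dte.Solution.Remove(project);

// Re-adding the project from a different location now fails with
// "A project with that name is already opened in the solution".
dte.Solution.AddFromFile(@"D:\SomeOtherPlace\MyProject.csproj", false);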

As we speak, the Visual Studio 2010 automation COM interface is simply not usable.

Friday, May 7, 2010

Workflow 4 Bugfest - no compensations

According to msdn

"Compensation in Windows Workflow Foundation (WF) is the mechanism by which previously completed work can be undone or compensated (following the logic defined by the application) when a subsequent failure occurs. This section describes how to use compensation in workflows."

OK. Let's start with this very simple example:
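The original post showed the workflow in the designer. As a rough code-only sketch of the same shape (the activity arrangement, messages, and the thrown exception are my assumptions, not the original XAML):

using System;
using System.Activities;
using System.Activities.Statements;

// A CompensableActivity that "does the transaction", followed by a
// Throw that simulates a subsequent failure.
Activity workflow = new Sequence
{
    Activities =
    {
        new CompensableActivity
        {
            Body = new WriteLine { Text = "Do Transaction" },
            CompensationHandler = new WriteLine { Text = "Compensate" },
            ConfirmationHandler = new WriteLine { Text = "Confirm" },
            CancellationHandler = new WriteLine { Text = "Cancel" }
        },
        new Throw
        {
            Exception = new InArgument<Exception>(
                ctx => new InvalidOperationException("Boom"))
        }
    }
};

try
{
    WorkflowInvoker.Invoke(workflow);
}
catch (InvalidOperationException)
{
    // the workflow terminates; none of the handlers above ever ran
}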


But when you execute it you will find that...

Pufffff, it's gone!

None of our handlers is ever called. Our transaction is down the drain.

Hmm, maybe the transaction was not successful, and this is the reason why none of the handlers got called.

Let's try this:

But when you execute it you will find that...

the last message is "Do Transaction", and then...

Pufffff, it's gone.

But on the other hand what happens if we delete the throw?

Then we see that the ConfirmationHandler gets called at the end of our workflow.

This means:

If we use a CompensableActivity and everything in our workflow completes successfully, the transaction gets "auto-committed". Otherwise it's puffff, gone.

But this behavior can be changed from outside.

Let's wrap the first example into one of these mysterious Try/Catch blocks and let it handle the exception.
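For reference, a sketch of the wrapped variant (again my reconstruction, not the original XAML): the same CompensableActivity and Throw, but inside a workflow TryCatch whose Catches section handles the exception.

Activity workflow = new TryCatch
{
    Try = new Sequence
    {
        Activities =
        {
            new CompensableActivity
            {
                Body = new WriteLine { Text = "Do Transaction" },
                CompensationHandler = new WriteLine { Text = "Compensate" },
                CancellationHandler = new WriteLine { Text = "Cancel" }
            },
            new Throw
            {
                Exception = new InArgument<Exception>(
                    ctx => new InvalidOperationException("Boom"))
            }
        }
    },
    Catches =
    {
        new Catch<InvalidOperationException>
        {
            // the exception is handled here, not propagated
            Action = new ActivityAction<InvalidOperationException>
            {
                Handler = new WriteLine { Text = "Handled" }
            }
        }
    }
};

WorkflowInvoker.Invoke(workflow); // now "Cancel" is printed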



What we now see is that all of a sudden our CancellationHandler gets called.

I am confused now.

The first thing is that I would never expect my compensation to just go puffff. I mean, I do serious things in a compensation. This makes especially no sense to me if a successful compensation auto-commits at the end of the workflow. And even less if a failed compensation calls the CancellationHandler only when it is wrapped into a Try/Catch block that handles the exception.

Update: In the MSDN Forum, Steve Danielson investigated the issue and gave an excellent hint on the difference between a C# Try/Catch and a Workflow Try/Catch. This covers the behavior of CompensableActivity as well.

Workflow 4 Bugfest - The Exception handling bug

I am currently digging into Workflow 4 (Final). After Workflow 3 really drove me nuts with its unbearable performance, I was at first positively impressed by Workflow 4. The designer as well as the workflow execution speed are where I expect them to be.
I downloaded the samples and went through the basic shapes section. All fine here.

But my impression is about to change, as Workflow 4 seems to contain a number of serious bugs.

This becomes very evident if you look at the execution of the finally block in a try/catch activity.

According to msdn...

"The finally block is useful for cleaning up any resources allocated in the try block as well as running any code that must execute even if there is an exception. Control is always passed to the finally block regardless of how the try block exits."


The simplest form of a TryCatch activity is this:
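The original designer screenshot is gone; a code-only equivalent might look like this (the thrown exception and the message are my assumptions):

using System;
using System.Activities;
using System.Activities.Statements;

Activity workflow = new TryCatch
{
    // no Catches: the exception should simply propagate
    Try = new Throw
    {
        Exception = new InArgument<Exception>(
            ctx => new InvalidOperationException("Boom"))
    },
    Finally = new WriteLine { Text = "Finally" }
};

try
{
    WorkflowInvoker.Invoke(workflow);
}
catch (InvalidOperationException)
{
    // in WF4 RTM "Finally" was never printed when the exception escaped
}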

Please note that there is no exception handler in the "Catches" section. I simply want the exception to propagate, but before it does that, I want to make sure my code block in Finally is executed no matter what happens.

But if you now execute the code, you will find out that the code in our Finally block never gets executed...

This will not change if you add an exception handler that does not handle the specific exception.

To make things worse, we will see that even if we handle the specific exception and want to rethrow it, our Finally is never executed.
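Sketched in code, that variant looks roughly like this (again my reconstruction): the specific exception is caught and rethrown with the Rethrow activity, and the Finally block is still skipped.

Activity workflow = new TryCatch
{
    Try = new Throw
    {
        Exception = new InArgument<Exception>(
            ctx => new InvalidOperationException("Boom"))
    },
    Catches =
    {
        new Catch<InvalidOperationException>
        {
            // handle the specific exception, then rethrow it
            Action = new ActivityAction<InvalidOperationException>
            {
                Handler = new Rethrow()
            }
        }
    },
    Finally = new WriteLine { Text = "Finally" } // still never executed
};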

Actually, the only time the Finally gets executed in case of an exception is when the exception is handled and not propagated.

This means:

In Workflow 4 you cannot propagate an exception and execute a finally block.

In other words:

You cannot use a finally block.

Update: In the MSDN Forum, Steve Danielson gave an excellent hint on the difference between a C# Try/Catch and a Workflow Try/Catch.

Friday, April 16, 2010

The WCF Data Services Toolkit mess

Related to custom data providers for WCF Data Services, I am currently looking at the Data Service Provider toolkit, and I notice several things.

1.) You obviously need tons of glue code to expose even the most simple classes like these:

public class B
{
    public string NoIdProperty { get; set; }
}

public class A
{
    public B MyB { get; set; }
    public int MyIDProperty { get; set; }
}

2.) The whole System.Data.Services.Providers namespace is as good as undocumented. At what point in time, and in which order, are the functions in IDataServiceQueryProvider called? Only reverse engineering or many breakpoints and much stepping through will tell you.

3.) The whole namespace smells like Entity Framework to me (e.g., looking at the ResourceTypeKind enum).

4.) The whole Data Services error reporting/debugging is not even close to real-life requirements. I definitely need more details than just "Something blew up somewhere". I need something like the WCF tracing facilities before I can even think about getting a Data Service into a production environment.

5.) The toolkit code is a mess.
I only looked at the RO examples, but just comparing the typed and untyped examples you find many differences in the glue code. They do really questionable things, like storing their own ResourceTypeAnnotation types in the CustomState properties of the ResourceTypes, which is obviously needed to store a missing link from the ResourceTypes to the ResourceSets.
They do really messy things, like the GetQueryRootForResourceSet(ResourceSet) function (typed/RO example), that are anything but intuitive.
They need a whole bunch of helper and extension classes (glue) only to make the provider interfaces somehow usable.

In short: the Custom Data Service Providers interface feels like it was not designed for what we are trying to do here.

I really love the idea of WCF Data Services that expose objects, because they close a significant gap in current data access architecture.

Maybe I did not get the concept of custom data service providers, or I am just too stupid, but so far it looks like the current ADO.NET Data Services implementation will never match my expectations.

Maybe Microsoft should consider dumping the current implementation and starting a rewrite that is designed for exposing objects over WCF, instead of just Entity Framework over HTTP. Trying to fix the current code seems to be a big waste of their time and mine. But at least Microsoft should do their homework in regards to documentation before thinking about extensions.

Wednesday, February 24, 2010

How many clicks do I need to delete a file owned by "TrustedInstaller"?

If you have the problem on this crap Vista-based Windows Server 2008 that you want to delete a file owned by "TrustedInstaller", how many clicks do you need?

1.) Right Click on the file
2.) Left Click on "Properties"
3.) Left Click "Security Tab"
4.) Left Click "Advanced" Button
5.) Left Click "Owner" Tab
6.) Left Click "Edit" Button
7.) Left Click UAC "Continue" Button
8.) Left Click "Administrators"
9.) Left Click "OK" Button
10.) Left Click "OK" Button of Security Dialog
11.) Left Click "OK" Button
12.) Left Click "Edit" Button
13.) Left Click UAC "Continue" Button
14.) Left Click "Administrators" Button
15.) Left Click "Full Control" Checkbox
16.) Left Click "OK" Button
17.) Left Click "OK" Button
18.) Right Click "File"
19.) Left Click "Delete"
20.) Left Click "Continue" Button
21.) Left Click UAC "Continue" Button

And you need to do this for every single file.

Are you fucking kidding me?

Thank god there are a few command-line utilities that do the job just as expected:

Start an elevated "cmd" and use:

takeown /f <FileOrDirectory> /r /a
icacls <FileOrDirectory> /grant administrators:F /t

Tuesday, February 16, 2010

XSD to NHibernate-WCF contract converter

Suppose you want to generate a WCF data contract from an XSD. In this case you use svcutil with the /dconly option. This tool is based on XsdDataContractImporter, so that class can convert XSD files the same way svcutil does.

But what do you do if you want to persist your objects with NHibernate? You cannot do this with the contract generated by svcutil. The core of the problem is that, though you can switch the collection type to IList with the /ct: option, the generated data contract will not build. The second issue is that all properties are generated as public, but not as virtual public, as we need for NHibernate.
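For illustration, this is the shape of class the converter needs to emit: virtual properties for NHibernate's lazy-loading proxies and IList<T> for the collections (a hypothetical sample, class names invented):

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class Order
{
    [DataMember]
    public virtual int Id { get; set; }

    // IList<T> instead of the default collection type, and virtual
    // so NHibernate can proxy the property.
    [DataMember]
    public virtual IList<OrderLine> Lines { get; set; }
}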

As svcutil is basically a wrapper around framework classes, I slapped together a small custom XSD-to-class converter that generates classes that can be directly filled by NHibernate. All the customization is encapsulated in xsdconverter.cs, so feel free to change the frontend (WPF here) to whatever you like.

90% of the code is dedicated to another feature of the generator: it is capable of generating code comments from your <xs:annotation> elements. This code is based on the code released by the Xsd2Code project.

You can get it here.

Due to some unlucky limitations of System.Runtime.Serialization (see comments), when using the NHibernate option the serialized object may not be valid against the schema. As long as you do not require the XML to be valid against your schema (and you probably won't), this restriction does not matter though.

Monday, February 15, 2010

Enjoy the WCF Data Services sandwich

For 60 years in IT, there have been a lot of ever-repeating questions in every project.

Among them:

How do I load/persist my object in my relational database?

Whoever hopes Microsoft is finally giving the answer with Entity Framework 4 will be disappointed. EF 4 is nailed onto my SQL Server relational tables. But what is a data-access layer worth that exports relational tables? Well, it's simply not what I call a data-access layer. It is a relational table presentation layer, but nothing more. For me a good data-access layer returns at least data transfer objects that totally abstract the physical storage. Given the fact that I don't like the idea of stacking my own projection framework on top of a table presentation layer, EF4 can't be part of any serious architecture I use.

But what about WCF Data Services?

Among the ever-recurring questions in IT there is also the following:
How can I forward a query from my client to my object repository if I want to?

The problem with DDD repositories is that you basically need as many functions as the number of query possibilities you want to support.

Take the following example:

public class A
{
    public string PropA { get; set; }
    public string PropB { get; set; }
    public string PropC { get; set; }
}

How many operations do you have to implement until you get the flexibility you need?

Maybe

GetAByPropA(string particle);

GetAByPropB(string particle);
GetAByPropC(string particle);
GetAByPropAorB(string particle,string particleB);
GetAByPropAandB(string particle, string particleB);
GetAByPropAorC(string particle, string particleB);

Of course you would not implement

GetAByPropAorB(string particle,string particleB);

but maybe

GetAByQuery(List<string>, List<Operator>)

But what are the Operators....

You have 2 options now.

1.) You accept that you cannot implement all the operations you basically need. That means accepting the gap. That means you need to do client-side filtering. That means your data-access layer puts much more load on the whole system than necessary.
2.) You find yourself writing your own query language.

So either I take the ugly 90% solution, or I waste my project time writing a query language instead of implementing business cases. From my point of view this sounds like the choice between the plague and cholera. Basically, this is the moment to realize we have just hit the ceiling of current SOA patterns.

But wait. Microsoft promises me they have the answer. It's called WCF Data Services!

How do WCF Data Services relate to classic WCF services?

In classic WCF you expose operations. In WCF Data Services you expose objects. WCF Data Services come with built-in functionality to perform queries on a collection of objects exposed through the service.

In .Net terms this means I can perform a Linq query on the collection of my client proxy object. The client serializes my query via the OData protocol, which is then deserialized on the server into a Linq expression again. And with my Linq expression I can basically forward the query directly to my data store. This means I retrieve only the exact data requested and return just the amount of data necessary.
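As a hedged sketch of what that looks like on the client (service URL, entity set, and entity type are placeholders):

using System;
using System.Data.Services.Client;
using System.Linq;

var context = new DataServiceContext(new Uri("http://localhost:1234/MyService.svc"));

// The Linq query below is not executed locally; on enumeration it is
// translated into an OData request along the lines of:
//   GET /MyService.svc/MyEntities?$filter=PropA eq 'foo'
var query = context.CreateQuery<MyEntity>("MyEntities")
                   .Where(e => e.PropA == "foo");

foreach (MyEntity entity in query)
    Console.WriteLine(entity.PropA);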

Awesome! All I do is expose my object, and that is it! Bye, 100 insufficient operations; bye, you hateful custom query language.

I watched the videos; everything looks absolutely idiot-proof. Let's try it.

I tried it on .Net 3.5 with the latest updates, as well as on .Net 4 with Visual Studio 2010 RC.

But as I do not like Entity Framework, I used my really simple WCF contract. I exposed it through my Data Service and....

"Request Error. The server encountered an error processing the request. See server logs for more details."

Who knows where the log for the ASP.NET development server is?

To save you the same amount of frustration, I recommend you slap the following attribute on your DataService class:

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
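For context, a sketch of where the attribute goes ("MyDataService" and "MyDataSource" are placeholder names, and UseVerboseErrors is the .Net 4 DataServiceConfiguration variant):

using System;
using System.Data.Services;
using System.ServiceModel;

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class MyDataService : DataService<MyDataSource>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // open everything up for reading while debugging
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        // additionally put verbose error details into the response
        config.UseVerboseErrors = true;
    }
}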

In .Net 3.5 the error message you now get is even less helpful at first glance:

"The XML page cannot be displayed"

To save you some more frustration, I recommend using Firefox as your default browser. In case of invalid XML, IE simply does not let you view the source of the document. Although Firefox comes up with a similar "XML processing error" message, you can at least always show the source of the page, and sometimes this will get you to the source of the problem.

If you get meaningful exceptions like

System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke ....

this probably means the same as the error message:

"MyType is not entity type"

? What does that mean?

With WCF Data Services you just cannot expose WCF data contracts. Basically WCF Data Services are missing WCF compatibility. Well.....

You can only expose objects that have either:

a public get- and settable property: ID (yes, case-sensitive)
or
a public get- and settable property: <classname>ID
or are decorated with the
[DataServiceKeyAttribute]

What does that mean?

public class MyClass
{
    public string Name { get; set; }
}

Not working. And be careful with just slapping [DataServiceKeyAttribute("Name")] on your class: as Name is not a unique identifier of your class, you may see some unexpected results if your query provider returns two objects with the same name. Make sure you test this choice.
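For completeness, that attribute variant looks like this (a sketch; DataServiceKeyAttribute lives in System.Data.Services.Common):

using System.Data.Services.Common;

[DataServiceKey("Name")]
public class MyClass
{
    // works as a key only as long as Name is actually unique
    public string Name { get; set; }
}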

public enum myEnum
{
    one,
    two
}

public class MyClass
{
    public string ID { get; set; }
    public myEnum MyEnum { get; set; }
}

Not working.

public class MyOtherClass
{
    public string Name { get; set; }
}


public class MyClass
{
    public string ID { get; set; }
    public MyOtherClass MyOtherClass { get; set; }
}

Not working, as MyOtherClass has no unique identifier of its own.

It looks like we have to approach the question from the other side. So what is working?

Well, basically, if you want to transport your own classes: not much. If you have no access to the source of the class: basically nothing. But I have my WCF data contracts! What happened?

If you look at the Microsoft podcasts on the subject, you see that most of their examples are based on the Entity Framework. And if you look closer, you find that all the limitations currently in Entity Framework are found in Data Services as well. In other words, the two are perfect partners. But this in fact means that both technologies are quite tightly coupled. This makes sense if you consider that WCF Data Services were called ADO.NET Data Services before. The original design was obviously not for exposing objects but database entities. To me it feels like the whole concept was originally intended to give Silverlight developers access to a remote SQL Server database, because Silverlight runs in the sandboxed context of the browser behind a firewall.

But in fact this is what you get.

So what basically happened is that they nailed Data Services onto Entity Framework, which is nailed onto relational SQL Server tables. In short: WCF Data Services are intended to provide web-based access to relational SQL Server tables.

Now you tell me something about decoupled architecture..... It is basically amazing how they managed to nail together not just some software components but three complete technology frameworks.

If you try to expose your own object, or even just your WCF data contract, through WCF Data Services, you will probably not recognize your object anymore by the time your implementation finally satisfies WCF Data Services.

In the end the result is more than disappointing. If you do not like the SQL Server / Entity Framework / Data Services sandwich, you can put these technologies aside and wait for Microsoft to publish something more generally usable. Maybe someday, somewhere on CodePlex?

More and more, the whole .Net 4 release seems to be a full set of half-ready technologies. WF4 is missing state machines, the ORM Entity Framework 4 is missing the (o)bjects and (m)appings, WCF Data Services are missing the WCF, and industry standards like XSLT 2 are completely missing. Maybe it's time for Microsoft to rename .Net 4 to .Net 3.75. For me it looks like Microsoft lost its sight in the Cloud.

Please correct me if I got something completely wrong. I am more than happy to correct my findings. Please use the comment function for this.

Update: With the move from ADO.NET Data Services to WCF Data Services, Microsoft obviously realized that the current implementation does not yet fulfill the expectations implied by the term WCF.
That is why they came up with a custom Data Service Provider interface that allows you to plug new models into WCF Data Services.

Entity Framework 4 - The light of .Net ORM dawn?

For 60 years in IT, there have been a lot of ever-repeating questions in every project.
Among them:

In what format do I send my data to the consuming system?
Well, finally we solved this.... That's XML. Checked.

How do I get my data to the consuming system?
Well, that's WCF for all .NET folks. Checked.

How do I load/persist my object in my relational database?
Well, hmm, uhhh.

I promise the number of different relational data access/persistence implementations in .Net is only a little lower than the number of .Net apps running on this planet that use the technology. How we do data access is a question of the style of each developer, and thus dependent on the person, not the technology. This of course adds risk to every .Net project at a quite low level.
Compared to Java, .NET seems to be light years behind in the area of ORM. What Microsoft shipped as Entity Framework 1.0 was about as good as the WF-WCF 3 integration, or in other words.... simply beyond all comment. But the upcoming .Net 4 ships the new Entity Framework 4. So is the light of dawn finally coming into the .Net ORM world?

I tried it, and the result brought me back to earth quickly. Suppose you have

class B
{
    public string Id { get; set; }
}

class A
{
    public string Name { get; set; }
    public List<B> Bs { get; set; }
}

and you want to map this to a database. You find out that...

you cannot do that with the standard approach of Entity Framework 4. Here you can only move a property from entity A to B if there is a 1:1 relationship between those tables.

Out of the box, EF4 is just a lazy-loading DataSet 2.0 with auto-generated query, update, and insert commands. But it is nailed onto my relational tables. The other problem I see here is that out of the box EF4 currently supports SQL Server only. If you want support for another database, you need a 3rd-party provider.

From an architecture point of view this is good enough for "Click and Shoot" applications, but not for serious applications that require abstraction of the data from the underlying storage.

If you want to use EF4 in these scenarios, you need to switch to POCO mode. It looks like you have to sacrifice most of the tools and code generation support though. There are some POCO templates for code generation, but if I look at the known defects list, this whole thing does not look quite production-ready. It looks like the EF evolution from DataSet 2.0 to a serious ORM tool is ongoing, but I would not bet on it happening with this release. So hopefully with EF 5 we can expect a more mature solution here.

So meanwhile, when it comes to a true ORM, do we have to walk in utter darkness instead of the light of dawn?

No.

I looked into NHibernate lately, and although it does not completely meet my (too high?) expectations of an *Object* Relational Mapper, you can do a lot more with it than you can with EF4. It really feels like a mature ORM framework to me.

So I cannot map the exact example above, but I can at least map something that gets me to the usability I intended:

class B
{
    public virtual string Id { get; set; }
}

class A
{
    public virtual string Name { get; set; }
    public virtual IList<B> Bs { get; set; }
}


This is pretty much a basic case for NHibernate. And you can solve a lot more problems with it, like inheritance, cyclic references, and write/read batching (in other words, performance tuning), and it supports lazy loading with different proxies...
There is a (not quite impartial) comparison of Entity Framework 4 vs. NHibernate. The comparison ends with the pros of EF4 being "A better Linq provider and it's from Microsoft".
The downside of NHibernate is, of course, that it is not just "import tables" click-and-shoot. As you have some choice in the mappings, you have more complexity and more handwriting and configuration.

Maybe I have to give up my idea of *the* ORM. It seems you have to forget about the "O". The best that can be done is an "E" or "Entity" Relational Mapper, where "Entity" is a technology-dependent subset of "Object". And the bigger your subset is, the better your ERM tool.

Tuesday, January 26, 2010

Do not use Windows Backup!

As we speak, this application is FUBAR, or in other words "fucked up beyond all repair".

Why?

  • I never saw a completely successful backup, neither on my Vista box nor on my Windows 7 box.
  • You only waste hours or days running backups that are never successful, scanning your disk with chkdsk or sfc, searching the net, searching the registry, screwing up your system's registry, or formatting your disk.
  • The errors are just cryptic hexadecimal numbers like 0x80070057 or 0x80070570, without any reference to the cause of the error.

When I first started Windows Backup on Windows 7, I kept getting error 0x80070002, which quite obviously means that a registry key named ProfileImagePath below HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList was pointing to a directory %systemroot%\system32\config\systemprofile that did not exist on my machine. In this case I could convince Windows Backup to go on after I manually created this directory for it.
That was it? No. After that I got an obvious 0x80070570, followed of course by a 0x8007000D, and on my Vista box all I naturally get is 0x80070079. In the meantime I ran hours of chkdsk /r and sfc /scannow. In the end I assumed that maybe my backup hard drive had some kind of problem and reformatted it.

What I did not know at this point was that this meant the end of "Windows Backup" on my machine. When I now start Windows Backup, all I get is 0x80070057:



It took Microsoft half a year of investigation to find out that this error in fact is related to some problems they have with non-US locale settings.

The closest I ever got was a single unreproducible backup that ended "successfully with errors". But when I checked the log, I found this error message:

Error while backing up "C:\Windows\System32\config\systemprofile\myFolder". The system cannot find the file specified (0x80070002)

The surprising part is that this folder never existed and never will exist on my system. This raises the question why a *backup* program starts seeing ghost files, which of course it later fails to find.

It also raises the question what specification, development, and quality assurance processes they have at Microsoft that allow them to ship *twice* a solution that throws hexadecimal error messages in a user's face. From a software engineering perspective this is bare amateur level. If Microsoft wants to know why people love Apple, here you have the answer: they would not ship crap like this even once.

The reason could be simple. It feels like the core of Windows Backup is some component Microsoft bought and then tried to cram somehow into Windows. I would not be surprised if there were just some sort of (former) third-party backup.exe running under the hood.

As we are speaking of backup, and that means the security of your data, when using Windows Backup you must ask yourself the following questions:

Do you really want to trust your data to a backup program
  • that is incapable of giving a useful error message?
  • that needs folders to be manually created to work at all?
  • that tries to backup files that do not exist?
  • that is dependent on US locale settings?
  • that even after the Vista disaster, Microsoft itself was unable to fix in the three years before Windows 7?
So the conclusion is simple:

If you love your data,
do not use Windows Backup!

Tuesday, January 19, 2010

How to pass arrays from .Net C# to Oracle

Guess you have the following query

select * from table where table.id in (:MyIDList)

and you want to pass a number of IDs in the binding variable :MyIDList from C# to your Oracle SQL. If you check the types of binding variables available for this job, you find out that Oracle only supports scalar types as binding variables.
If you dig further, you find a couple of bright solutions that are all based on splitting up some string (varchar2) into several elements. But due to the fact that the length of varchar2 binding variables is limited to 4K in SQL and 32K in PL/SQL, this is not a scalable solution to the problem.

Even if you think you are smart and try a string replace of :MyIDList with literal elements like 'a','b','c', let me assure you that the limit for literal elements in an IN clause is 1000. Also, your performance will degrade significantly, as the query cache won't recognize the SQL as having been executed before, and therefore Oracle has to recompile it on every execution.

Is it impossible to pass a variable-length array to SQL?

Let me put it straight!
Yes and No.

In pure SQL it is impossible to pass a variable-length array.

But in PL/SQL it is not.

"Yes great", you think, "but I need it in SQL!"

The trick is a PL/SQL wrapper to SQL!

Simply use the following ingredients:

Define two global types (and don't try to be smart here; we need IDType for a couple of reasons):

CREATE OR REPLACE
type IDTYPE as object (
id varchar2(20)
);

CREATE OR REPLACE
type IDTABLETYPE as table of IDType;

Now create a PL/SQL package:

CREATE OR REPLACE PACKAGE MYPACKAGE
as

type stringTableType is table of varchar2(20) index by binary_integer;
type RefCursorType is ref cursor; -- must be declared in the spec, as the signature below uses it

procedure GetMyTableByIDs
(
    p_MyIDList IN stringTableType,
    p_outRefCursor out RefCursorType
);

end;

CREATE OR REPLACE PACKAGE BODY MYPACKAGE
as

procedure GetMyTableByIDs
(
    p_MyIDList IN stringTableType,
    p_outRefCursor out RefCursorType
)
as
    iMyIDList IDTableType;
begin

    -- copy the associative array element by element into the nested table
    iMyIDList := IDTableType();
    iMyIDList.Extend(p_MyIDList.count);

    for i in p_MyIDList.First .. p_MyIDList.Last
    loop
        iMyIDList(i) := IDType(p_MyIDList(i));
    end loop;

    open p_outRefCursor
    for
        select * from table where table.id in (select id from table(iMyIDList));

end GetMyTableByIDs;

end;

What is going on here?

The first thing you notice is that we have two very similar array (table) types.

Globally we defined the

type IDTABLETYPE as table of IDType -- IDType is an object wrapping a varchar2(20)

This is, in Oracle terms, a "nested table". This type is available in SQL and PL/SQL. IDType brings the property "ID" of type varchar2(20).
In PL/SQL we defined the very similar type:

type stringTableType is table of varchar2(20) index by binary_integer;

This is an "index-by table" or "associative array" in Oracle terms. Associative arrays are best understood as hash tables and are available in PL/SQL only. For a more detailed explanation of the differences, please have a look at "Collection Types in PL/SQL".

But why do you copy the arrays one by one?

Because, as you now see, Oracle obviously has two different development units for SQL and PL/SQL. And they do not seem to talk to each other very much.

The result of three days in short:
  • There is no way to pass a nested table as parameter to a stored procedure in C#
  • There is no way to use an associative array in SQL
  • There is no way to assign/initialize a nested table to/with an associative array

Great, but how do we use it in C#?

using System.Data;
using Oracle.DataAccess.Client; // the ODP.NET driver (see below)

OracleConnection conn = new OracleConnection("MyConnectionString");
OracleCommand cmd = conn.CreateCommand();
cmd.CommandText = "MyPackage.GetMyTableByIDs";
cmd.CommandType = CommandType.StoredProcedure;
cmd.BindByName = true;

// the out ref cursor that carries the result set
cmd.Parameters.Add(new OracleParameter("p_outRefCursor", OracleDbType.RefCursor)).Direction = ParameterDirection.Output;

// the associative array: one varchar2 element per ID
cmd.Parameters.Add(new OracleParameter("p_MyIDList", OracleDbType.Varchar2)
{
    CollectionType = OracleCollectionType.PLSQLAssociativeArray,
    Value = my_list_with_ids.ToArray()
});

OracleDataAdapter da = new OracleDataAdapter(cmd);
DataSet myDataSet = new DataSet();
da.Fill(myDataSet);

It's not working!

I can find neither

cmd.BindByName = true;

nor

OracleCollectionType.PLSQLAssociativeArray

The catch with this solution is that you have to use the Oracle .NET driver (Oracle.DataAccess) instead of the Microsoft driver (System.Data.OracleClient). But with .Net 4 Microsoft's Oracle driver is marked deprecated anyway, so get used to it.

This is it: the only way to pass an array to SQL.

"Holy shivers!" you think. This is a lot of glue for a simple task like this!

Basically yes, BUT you can do three things now:
1.) You can use a macro that converts your SQL into PL/SQL by automatically replacing the binding variable :MyIDList with the PL/SQL parameter p_MyIDList.
2.) You can tune your performance significantly by rewriting your SQL.
3.) You can clean up your code a lot by using default values.

Improve overall performance

Our former SQL

select * from table where table.id in (select id from table(iMyIDList))

becomes unbearably slow with a large number of IDs and rows in the table. What you can do now is rewrite our SQL to

select * from table
join (select id from table(iMyIDList)) IdFilter on table.id = IdFilter.id

And if you want to develop in SQL and simply convert it with our macro you can add the following function to your package:

FUNCTION GetDefaultTable
(
param varchar2
)
RETURN IDTableType
is
begin
return IDTableType(IDType(param));
end;

and rewrite your regular SQL to:

select * from table
join (select id from table(MyPackage.GetDefaultTable(:MyIDList))) IdFilter
on table.id = IdFilter.id

Use PL/SQL defaults:

A feature of PL/SQL is that you can define default values for all parameters.
In a simple case this looks like:

procedure GetSomeThing
(
    p_param1 in varchar2 default 'SomeDefaultValue'
);

But what do we do with our associative array?

The fancy part about default parameters is that the value can be a call to a function.....

So

procedure GetSomeThing
(
p_MyIDList IN stringTableType default GetDefaultTable('DefaultValueForElement'),
p_param1 in varchar2 default 'SomeDefaultValue',
p_outRefCursor out RefCursorType
);

works nice and easy.

In combination with the C# line

cmd.BindByName = true;

you can now pass only those parameters that differ from their defaults, which can be used to write much nicer code. Instead of passing all the parameters defined for each procedure (which results in a clunky piece of code), you just set a parameter on the stored procedure call if the associated value is non-default.
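A minimal sketch of such a call (same hypothetical package as above): only the out cursor is bound, everything else falls back to its PL/SQL default.

using System.Data;
using Oracle.DataAccess.Client;

OracleCommand cmd = conn.CreateCommand();
cmd.CommandText = "MyPackage.GetSomeThing";
cmd.CommandType = CommandType.StoredProcedure;
cmd.BindByName = true;

// p_MyIDList and p_param1 are not bound at all and keep their defaults
cmd.Parameters.Add(new OracleParameter("p_outRefCursor", OracleDbType.RefCursor)).Direction = ParameterDirection.Output;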