<t>OK, third answer's the charm?<br/>
<br/>
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx<br/>
<br/>
A couple of things about the storage emulator, from a friend who did some serious digging into it:<br/>
<br/>
"Everything is hitting a single table in a single database (more partitions doesn't change anything). Each table insert operation is at least 3 SQL operations. Every batch runs inside a transaction. Depending on the transaction isolation level, those batches will have limited ability to execute in parallel.<br/>
<br/>
Serial batches should be faster than individual inserts due to SQL Server behavior. (Individual inserts are essentially little transactions that each flush to disk, while a real transaction flushes to disk as a group)."<br/>
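For reference, a batch insert against the real service looks roughly like this. This is a sketch using the classic Microsoft.WindowsAzure.Storage SDK; the table name, connection string, and `MyEntity` type are placeholders. The important constraints are that every entity in a batch must share one partition key and a batch tops out at 100 operations:<br/>

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Sketch: batch-insert entities that all share one PartitionKey.
// Connection string, table name, and MyEntity are placeholders.
var account = CloudStorageAccount.Parse(connectionString);
var table = account.CreateCloudTableClient().GetTableReference("mytable");
table.CreateIfNotExists();

var batch = new TableBatchOperation();
foreach (MyEntity e in entities)   // all with the same PartitionKey
{
    batch.Insert(e);
    if (batch.Count == 100)        // service limit per batch
    {
        table.ExecuteBatch(batch);
        batch.Clear();
    }
}
if (batch.Count > 0)
    table.ExecuteBatch(batch);     // flush the partial final batch
```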
<br/>
I.e., using multiple partitions doesn't affect performance on the emulator, while it does against real Azure storage.<br/>
<br/>
Also, enable logging and have a look at the logs - c:\users\username\appdata\local\developmentstorage<br/>
<br/>
A batch size of 100 seems to offer the best real performance; turn off Nagle, turn off Expect: 100-continue, and beef up the connection limit.<br/>
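In .NET those three settings live on System.Net.ServicePointManager and need to be set once at startup, before the first request goes out; something like:<br/>

```csharp
using System.Net;

// Set once, at app startup, before the first storage request is made.
ServicePointManager.UseNagleAlgorithm = false;    // don't buffer small writes
ServicePointManager.Expect100Continue = false;    // skip the 100-continue round trip
ServicePointManager.DefaultConnectionLimit = 100; // default is only 2 per host
```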
<br/>
Also make damn sure you are not accidentally inserting duplicates; that will cause an error and slow everything way, way down.<br/>
<br/>
And test against real storage. There's a pretty decent library out there that handles most of this for you - http://www.nuget.org/packages/WindowsAzure.StorageExtensions/ - just make sure you actually call ToList on the adds and such, since nothing really executes until the sequence is enumerated. That library uses DynamicTableEntity, so there's a small perf hit for the serialization, but it does let you use pure POCO objects with no TableEntity stuff.<br/>
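The ToList gotcha is just standard LINQ deferred execution. A minimal sketch of the trap (the `AddToTable` method name here is made up for illustration, not that library's API):<br/>

```csharp
using System.Linq;

// Deferred execution: Select builds a lazy sequence, it runs nothing yet.
var adds = entities.Select(e => AddToTable(e)); // no inserts have happened

// Only enumerating the sequence actually executes the work.
var results = adds.ToList();                    // NOW the inserts run
```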
<br/>
~ JT</t>